Could you please also provide the logs before this message, starting from the listing of all the steps to be processed? What you posted is only the final error message, which does not contain enough information to help us identify your issue.
Could you please also copy the Errors of the step that failed? You can provide the Output result as well.
Additionally, you could perform the following operations:
connect to the database and reset the S1 pre-processing:
psql -U admin sen4cap
sen4cap=# delete from l1_tile_history where downloader_history_id in (select id from downloader_history where satellite_id = 3);
sen4cap=# update downloader_history set status_id = 2 where satellite_id = 3;
sen4cap=# \q
sudo systemctl restart sen2agri-services
Once you have done that, please capture the sen4cap services messages for 2-3 hours with:
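For example (assuming the services log to journald on CentOS 7; adjust to your setup):
# one possible capture command: follow the sen2agri-services journal into a file and stop with Ctrl+C after 2-3 hours
sudo journalctl -u sen2agri-services -f --no-pager > sen2agri_services_log_3h.txt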
I work on a virtual machine from CODE-DE.org (CentOS 7 + fresh pre-installation of SEN4CAP V2.0, manually upgraded to V3.0). The Sentinel data is mounted via s3fs.
So, according to this thread, I checked that the flag "processor.l2s1.copy.locally" is set to true (which was already the case).
In my opinion the read and write permissions on the /mnt/archive/ directory should also be correct (I ran sudo chmod -R a+wrx /mnt/archive), shouldn't they?
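To double-check, generic shell commands like these (nothing Sen4CAP-specific) can be used to verify the directory is actually writable:
# show the mode bits of the archive directory and try a throwaway test write
ls -ld /mnt/archive
touch /mnt/archive/.permission_test && rm /mnt/archive/.permission_test && echo "write OK"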
What I notice when studying the log file provided yesterday are entries like the following:
"Jan 18 09:31:00 sen4rlp.codede.internal start.sh[25891]: Error: [NodeId: ReadOp@sourceProduct] Specified 'file' [/mnt/master_parent/S1B_IW_SLC__1SDV_20210323T055019_20210323T055047_026138_031E72_FDAC.SAFE] does not exist."
Where does the reference to the "/mnt/master_parent/" directory come from? It does not actually exist on my system, so the reference inevitably leads nowhere.
I would be very grateful for any helpful tips to fix the problem!
copy the extracted sen4cap-sentinel1-preprocessing-3.0.1.jar into /usr/share/sen2agri/sen2agri-services/modules/
stop sen2agri-services
sudo systemctl stop sen2agri-services
reset the products having status_id 6 in the downloader_history (if any)
psql -U admin sen4cap -c "delete from l1_tile_history where downloader_history_id in (select id from downloader_history where status_id = 6 and satellite_id = 3)"
psql -U admin sen4cap -c "update downloader_history set status_id = 2 where status_id = 6 and satellite_id = 3"
start sen2agri-services
sudo systemctl restart sen2agri-services
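After the restart, a quick way to verify that the reset took effect is to count the S1 products per status (a sketch using only columns already referenced above); there should be no entries left with status_id 6:
psql -U admin sen4cap -c "select status_id, count(*) from downloader_history where satellite_id = 3 group by status_id"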
Could you please also check if you have products in your downloader_history table with a full_path other than /codede/xxxx?
psql -U admin sen4cap -c "select full_path from downloader_history where full_path not like '/codede/%'". Please let me know if that is the case.
The absence of products having /codede/xxxx in the full_path is normal, looking at the configuration of the Sentinel1 - Scientific Data Hub datasource: you used "Symbolic link" for the "Fetch mode". I think this might also be the reason why the S1 pre-processing is failing.
What I suggest is to:
Change the "Fetch mode" from "Symbolic link" to "Direct link to products"
stop sen2agri-services (as in the post above)
remove all entries for S1 from the downloader_history:
psql -U admin sen4cap -c "delete from l1_tile_history where satellite_id = 3"
psql -U admin sen4cap -c "delete from downloader_history where satellite_id = 3"
delete the symlinks from /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo (or even the full directory)
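One possible way to do the symlink cleanup from the shell (a sketch; double-check the path before running, and note that the commented-out variant removes the whole directory):
# delete only the symlinks inside the download folder
find /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo -type l -delete
# or remove the full directory, as mentioned above:
# sudo rm -rf /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo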
If you pre-process S2 with MAJA, the above changes/operations should also be performed for the Sentinel2 - Scientific Data Hub datasource, using satellite_id = 1 in the SQL queries, as shown below.
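For reference, the analogous queries for Sentinel2 would be (the same statements as above, just with satellite_id = 1):
psql -U admin sen4cap -c "delete from l1_tile_history where satellite_id = 1"
psql -U admin sen4cap -c "delete from downloader_history where satellite_id = 1"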
I implemented your instructions yesterday evening. In fact, the first S1 products have now already been processed. Many thanks again for this!
But two things still puzzle me:
As you already guessed, I configured the data source for Sentinel2 analogously to the Sentinel1 one, and the data is pre-processed with MAJA. More precisely, 306 tiles have already been computed successfully despite the "Symbolic link" fetch mode (501 failed, according to l1_tile_history.failed_reason all because of too high cloud cover). Do you have an explanation why symbolic links work for S2 but not for S1? Or should I still expect problems later on, despite the successful L2A atmospheric correction?
If I remember correctly, I deliberately selected "Symbolic link" for the fetch mode, because with "Direct link to products" the downloads (at least for S2) were not successful, and the user manual says that in our deployment scenario the fetch mode should be set to "Symbolic link" or "Direct link to product". Do you think it would be useful to clarify the corresponding passage in the manual in a future version (section 4.2.1.2, page 40), since in certain circumstances only one of the two seems to work?
Bottom line:
Do you really think I should now apply your above changes/operations to the Sentinel2 data as well, or would I then lose my already processed atmospheric corrections and the L3B products?
Hi, @cudroiu
Could you please advise how to get the S1 pre-processing to work?
It fails at Amplitude Terrain Correction 6-1 without any useful error message:
Error executing step 6 (Amplitude Terrain Correction) [codes=[1]]
We tried applying the patch from January, but nothing changed.
I have attached the log file in case it helps: sen2agri_services_log_3h.txt (2.3 MB)
Hoping for suggestions.
I don't know yet why the downloads have stopped, but the three days of data that were downloaded seem to be processed correctly (at least for Amplitude, as no data for Coherence processing has been downloaded).
Hi Cosmin,
unfortunately I still have problems with Sentinel-1: not really with the pre-processing, but with the downloads.
It looks like something went wrong with the downloader moments after applying that patch.
The season start is 15.04; I have processed Amplitude products for 09., 10. and 11.04.
In the dwn_def folder I have some data from the 9th to the 23rd of April. In the downloader_history database table these entries have statuses 2 (reason: no previous product) and 5 (reason: null); the status 5 ones are the newest (21.04 and 23.04), and they appear neither in the product browser nor in the l2a-s1 folder.
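(These entries can be checked directly in the database with something like the following; the site id 5 is taken from the log below:)
psql -U admin sen4cap -c "select * from downloader_history where satellite_id = 3 and site_id = 5"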
The system log says something about failed downloads:
o.e.s.s.i.DownloadServiceImpl - Page #3 (query 1 of 1) for {site id=5,satellite=S1} returned 0 results
o.e.s.scheduling.LookupJob - At least one query failed for site lv2022 (reason: null). It was saved in '/mnt/archive/dwn_def/s1/default/lv2022/failed_queries' and will be retried later.
o.e.s.scheduling.LookupJob - No S1 products were discarded.
o.e.s.scheduling.LookupJob - [site 'LV2022',sensor 'Sentinel1'] Found 0 products for site lv2022 and satellite S1
o.e.s.scheduling.LookupJob - Actual products to download for site lv2022 and satellite S1: 0
o.e.s.scheduling.LookupJob - Job 'Lookup.Lookup-s1-lv2022' completed
o.e.s.services.ScheduleManager - Trigger 'Lookup.Lookup-s1-lv2022' completed with code 'NOOP'
In fact, there are no S1-related failed queries in that folder (there is one S2 failed query there, and both S1 and S2 failed queries in '/mnt/archive/dwn_def/s2/default/lv2022/failed_queries'; is that ok?).
I have tried a restart and force_download_restart, but nothing changed. Could this be related to the strange difference between the processed data and the L1C data in the downloads? What would be a correct way to close that gap? I would really appreciate any hints on how to get the downloads working again (S2 is downloading fine).