L2-S1 Pre-Processing Errors

Hello Sen4CAP Team,

I’ve tried to pre-process S1 images, but the pre-processing fails every time with an error. I copied the error below:

Error executing step 1 (Calibration) [codes=[1]]: [{gpt,-c,256M,-q,8,/mnt/archive/nld_2019_site_test/l2a-s1/SEN4CAP_L2A_S11_V20200212T172417_20200206T172506_VV_088/s1_step_1_1.xml,/mnt/archive/dwn_def/s1/default/nld_2019_site_test/S1B_IW_SLC__1SDV_20200212T172417_20200212T172444_020239_026536_C71C.SAFE,/mnt/archive/dwn_def/s1/default/nld_2019_site_test/S1A_IW_SLC__1SDV_20200206T172506_20200206T172533_031135_039421_D25C.SAFE}]

Thank you in advance for any further information!

Best Regards,
Guillaume

Dear Guillaume,

Could you please also provide the logs preceding this message, starting from the listing of all the steps to be processed? What you sent is only the final error message, which does not contain enough information to help us identify your issue.

Best regards,
Cosmin

Hello Cosmin,

Thanks for your very fast reply. Are these elements enough for you?

Hello Guillaume,

Could you please copy the Errors for the step that failed? You can also provide the Output result.
Additionally, you could perform the following operations:

  • connect to the database and reset the S1 pre-processing:

psql -U admin sen4cap
sen4cap=# delete from l1_tile_history where downloader_history_id in (select id from downloader_history where satellite_id = 3);
sen4cap=# update downloader_history set status_id = 2 where satellite_id = 3;
sen4cap=# \q
sudo systemctl restart sen2agri-services

Once you have done that, please capture the sen4cap services messages for 2-3 hours with:

sudo journalctl -fu sen2agri-services > logs_services.txt

After the specified interval, please send us the log file (logs_services.txt).
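Once the file is captured, a quick filter can pull out just the error lines before sending it. The sketch below fabricates a two-line stand-in for logs_services.txt so the filter can be demonstrated; with the real file, skip the `cat` step and run only the `grep`.

```shell
# Create a tiny stand-in for logs_services.txt (illustration only; the real
# file comes from the journalctl command above).
cat > logs_services.txt <<'EOF'
Jan 18 09:31:00 host start.sh[25891]: Listing steps to be processed
Jan 18 09:31:05 host start.sh[25891]: Error executing step 1 (Calibration) [codes=[1]]
EOF

# Print only the error lines, with their line numbers
grep -n "Error" logs_services.txt
```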

Best regards,
Cosmin

Hello Cosmin,

Thanks for your response, the first step was enough to resolve the problem.

Best regards,

Hi,

I have the same problem and followed your steps, @cudroiu.

Unfortunately, the first step did not solve the problem for me. So here is my log file: logs_services.txt.zip (862.8 KB)

Can you figure it out?

Thanks in advance for your help!

Perhaps a few more comments:

I work on a virtual machine from CODE-DE.org (CentOS 7 with a fresh preinstallation of SEN4CAP V2.0, manually upgraded to V3.0). The Sentinel data is mounted via s3fs.

So, following this thread, I checked that the flag "processor.l2s1.copy.locally" is set to true (which was already the case).

The read and write permissions on the /mnt/archive/ directory should also be correct (I ran sudo chmod -R a+wrx /mnt/archive).

What I noticed when studying the log file I provided yesterday are entries like the following:

"Jan 18 09:31:00 sen4rlp.codede.internal start.sh[25891]: Error: [NodeId: ReadOp@sourceProduct] Specified 'file' [/mnt/master_parent/S1B_IW_SLC__1SDV_20210323T055019_20210323T055047_026138_031E72_FDAC.SAFE] does not exist."

Where does the reference to the "/mnt/master_parent/" directory come from? It does not actually exist, so the path inevitably leads nowhere.

I would be very grateful for any helpful tips to fix the problem!

Hello,

Could you please try the jar in the following archive?
sen4cap-sentinel1-preprocessing-3.0.1.zip (108.5 KB)
In order to patch the services do:

  • unzip sen4cap-sentinel1-preprocessing-3.0.1.jar from the above archive
  • back up the file /usr/share/sen2agri/sen2agri-services/modules/sen4cap-sentinel1-preprocessing-3.0.0.jar to another location (just in case)
  • remove /usr/share/sen2agri/sen2agri-services/modules/sen4cap-sentinel1-preprocessing-3.0.0.jar
  • copy the extracted sen4cap-sentinel1-preprocessing-3.0.1.jar into /usr/share/sen2agri/sen2agri-services/modules/
  • stop sen2agri-services

sudo systemctl stop sen2agri-services

  • reset the products having status_id 6 in the downloader_history (if any)

psql -U admin sen4cap -c "delete from l1_tile_history where downloader_history_id in (select id from downloader_history where status_id = 6 and satellite_id = 3)"
psql -U admin sen4cap -c "update downloader_history set status_id = 2 where status_id = 6 and satellite_id = 3"

  • Start the sen2agri-services

sudo systemctl restart sen2agri-services
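For anyone unsure about the jar-swap steps, they can be rehearsed end to end in a throwaway directory before touching /usr/share/sen2agri. The sketch below mirrors the instructions with stand-in files in place of the real jars; it is an illustration, not the actual deployment.

```shell
# Throwaway staging area standing in for the real installation
STAGE=$(mktemp -d)
MODULES="$STAGE/modules"   # stands in for /usr/share/sen2agri/sen2agri-services/modules
BACKUP="$STAGE/backup"     # the "another location (just in case)"
mkdir -p "$MODULES" "$BACKUP"
touch "$MODULES/sen4cap-sentinel1-preprocessing-3.0.0.jar"  # old module
touch "$STAGE/sen4cap-sentinel1-preprocessing-3.0.1.jar"    # unzipped patch

# 1) back up the old jar, 2) remove it, 3) copy in the patched jar
cp "$MODULES/sen4cap-sentinel1-preprocessing-3.0.0.jar" "$BACKUP/"
rm "$MODULES/sen4cap-sentinel1-preprocessing-3.0.0.jar"
cp "$STAGE/sen4cap-sentinel1-preprocessing-3.0.1.jar" "$MODULES/"

# Only the 3.0.1 jar should remain in the modules directory
ls "$MODULES"
```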

Could you please also check whether your downloader_history table contains products with a full_path other than /codede/xxxx?

psql -U admin sen4cap -c "select full_path from downloader_history where full_path not like '/codede/%'"

Please let me know if that is the case.

Hope this helps.

Best regards,
Cosmin

Hi @cudroiu.

Great, thanks for your fast and as usual precise feedback!!

I have performed the patch as you described and restarted the services again. Unfortunately, so far it looks like nothing has changed in the output.

However, I also noticed that there is not a single record in the downloader_history table with a /codede/ path. They all start with either "/mnt/archive/dwn_def/s1/default/" or "https://apihub.copernicus.eu/apihub/odata/v1/".

Here is the output of the query: log.txt (262.6 KB)

At least the former corresponds to the configuration of the data sources. Is there something wrong with this?

Hi @jab_lp ,

The absence of products with /codede/xxxx in the full_path is normal given the configuration of the Sentinel1 - Scientific Data Hub datasource, as you used "Symbolic link" for the "Fetch mode". I think this might also be why the S1 pre-processing is failing.
What I suggest is to:

  • Change the “Fetch mode” from “Symbolic link” to “Direct link to products”
  • stop sen2agri-services (as in post above)
  • remove all entries for S1 from the downloader_history:

psql -U admin sen4cap -c "delete from l1_tile_history where satellite_id = 3"
psql -U admin sen4cap -c "delete from downloader_history where satellite_id = 3"

  • delete the symlinks from /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo (or even the full directory)

sudo rm -fr /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo

  • start sen2agri-services (as in post above)
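Before removing the directory, it can be reassuring to confirm it really contains only symlinks, which is what the "Symbolic link" fetch mode creates. The sketch below builds a throwaway directory with one regular file and one symlink to illustrate the check; on the real system you would point find at /mnt/archive/dwn_def/s1/default/lbm_rlp_kh_tr_wo instead.

```shell
# Throwaway stand-in for the dwn_def site directory (illustration only)
DEMO=$(mktemp -d)
touch "$DEMO/regular_file.txt"                            # would NOT match -type l
ln -s "$DEMO/regular_file.txt" "$DEMO/S1B_product.SAFE"   # a fetch-mode symlink

# -type l matches only symbolic links, i.e. exactly what would be deleted
find "$DEMO" -maxdepth 1 -type l
```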

The full_paths with "https://apihub.copernicus.eu/apihub/odata/v1/" mean the products do not exist in the /codede/ repository.

If you pre-process S2 with MAJA, the above changes/operations should also be performed for the Sentinel2 - Scientific Data Hub datasource (using satellite_id = 1 in the SQL queries).

Best regards,
Cosmin

Hi @cudroiu,

I implemented your instructions yesterday evening. In fact, the first S1 products have now already been processed. Many thanks again for this!

But two things still irritate me:

  1. As you already guessed, I configured the data sources for Sentinel2 analogously to the Sentinel1 data, and they are pre-processed with MAJA. More precisely, 306 tiles have already been successfully computed despite the "Symbolic link" fetch mode (501 failed, according to l1_tile_history.failed_reason all due to too high cloud cover). Do you have an explanation why "symbolic links" work for S2 but not for S1? Or should I still expect problems later on, despite the successful L2A atmospheric correction?

  2. If I remember correctly, I deliberately selected "Symbolic link" for the fetch mode because with "direct links" the downloads (at least for S2) were not successful, and the user manual says that the fetch mode in our deployment scenario should be set to "Symbolic link" or "Direct link to product". Do you think it would be useful to clarify the corresponding passage in the manual in a future version (section 4.2.1.2, page 40), since in certain circumstances only one of the two seems to work?

Bottom line:

Do you really think I should now apply your changes/operations above to the Sentinel2 data as well, or will I lose my already processed atmospheric corrections and the L3B products?

Hi, @cudroiu
Could you please advise how to get S1 pre-processing to work?
It fails on Amplitude Terrain Correction 6-1 without any useful error messages:
Error executing step 6 (Amplitude Terrain Correction) [codes=[1]]
We tried applying the patch from January, but nothing changed.
I have attached the log file in case it helps.
sen2agri_services_log_3h.txt (2.3 MB)
Hoping for suggestions.

Hello,
I think this is related to a recent release of SNAP in which an operator was removed.
There are 2 solutions:

  1. Use an older version of SNAP with only the updates from before this change. Such a version is a little hard to find; we have one, but it is difficult to share.
  2. Use the following jar file as a patch:
    sen4cap-sentinel1-preprocessing-3.0.1.jar.zip (108.7 KB)
    In order to apply it:
  • back up the existing /usr/share/sen2agri/sen2agri-services/modules/sen4cap-sentinel1-preprocessing-*.jar to another location
  • unzip the attached archive and copy sen4cap-sentinel1-preprocessing-3.0.1.jar to /usr/share/sen2agri/sen2agri-services/modules/
  • restart sen2agri-services (sudo systemctl restart sen2agri-services)

Please let me know if it works.

Best regards,
Cosmin

Thank you, Cosmin, this worked.

I don’t know yet why the downloads have stopped, but the three days of data that were downloaded seem to be processed correctly (at least for Amplitude, as no data for Coherence processing has been downloaded).

Hi Cosmin,
unfortunately I still have problems with Sentinel-1: not so much with the pre-processing, but with the downloads.

It looks like something went wrong with the downloader moments after applying that patch.
The season starts on 15.04; I have processed amplitude products for 09., 10., and 11.04.
In the dwn_def folder I have some data from 9 to 23 April. In the downloader_history table they have statuses 2 (reason: no previous product) and 5 (reason: null), but the newest ones (21. and 23.04) are not in the product browser or in the l2a-s1 folder.
The system log says something about failed downloads:

o.e.s.s.i.DownloadServiceImpl - Page #3 (query 1 of 1) for {site id=5,satellite=S1} returned 0 results
o.e.s.scheduling.LookupJob - At least one query failed for site lv2022 (reason: null). It was saved in '/mnt/archive/dwn_def/s1/default/lv2022/failed_queries' and will be retried later.
o.e.s.scheduling.LookupJob - No S1 products were discarded.
o.e.s.scheduling.LookupJob - [site 'LV2022',sensor 'Sentinel1'] Found 0 products for site lv2022 and satellite S1
o.e.s.scheduling.LookupJob - Actual products to download for site lv2022 and satellite S1: 0
o.e.s.scheduling.LookupJob - Job 'Lookup.Lookup-s1-lv2022' completed
o.e.s.services.ScheduleManager - Trigger 'Lookup.Lookup-s1-lv2022' completed with code 'NOOP'

In fact, there are no S1-related failed queries in that folder (there is one S2 failed query there, and both S1 and S2 failed queries in '/mnt/archive/dwn_def/s2/default/lv2022/failed_queries'; is that OK?).

I have tried restart and force_download_restart, nothing changed. Could that be related to that strange difference between processed data and L1C data in downloads? What would be a correct way to erase that gap? I would really appreciate any hints how to get downloads working again (S2 is downloading).