Adrien CHAN-HON-TONG, Baptiste CADALEN, Aurélien PLYER, Laurent SERRE

DOI Number: N/A

Conference number: HiSST-2025-052

Sensor-based guidance is required for long-range platforms when GNSS may be denied. To bypass the structural limitations of the classical registration-on-reference-image framework, we propose in this paper to encode the appearance of the target's surroundings (at all resolutions) from a stack of images of the scene into a deep network. This new framework is shown to be relevant on bimodal scenes (e.g. scenes that may or may not be snowy), even though it raises questions about the loss of epipolar geometry, which is far better understood and mastered than gray-box deep networks.
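To make the contrast concrete, the classical registration-on-reference-image framework the abstract criticizes can be sketched as exhaustive normalized cross-correlation of an observed patch against a single reference image. This is a minimal, hypothetical illustration (not the paper's method, and not its actual baseline implementation); it shows why the approach is tied to one fixed appearance of the scene, e.g. it degrades when the reference was captured without snow but the observation is snowy.

```python
import numpy as np

def ncc_register(reference: np.ndarray, patch: np.ndarray) -> tuple[int, int]:
    """Locate `patch` inside `reference` by exhaustive normalized
    cross-correlation. Hypothetical sketch of classical registration on a
    single reference image: the match is only as good as the appearance
    match between the reference and the live sensor view."""
    ph, pw = patch.shape
    rh, rw = reference.shape
    # Zero-mean, unit-variance normalization of the query patch.
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(rh - ph + 1):
        for x in range(rw - pw + 1):
            w = reference[y:y + ph, x:x + pw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = float((p * w).mean())  # correlation coefficient
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Usage: cut a patch at a known offset and recover that offset.
rng = np.random.default_rng(0)
scene = rng.random((40, 40))
patch = scene[12:20, 25:33].copy()
print(ncc_register(scene, patch))  # → (12, 25)
```

The proposed framework instead trains a network on a stack of images covering several appearances of the same scene, which is what makes the bimodal (snowy / not snowy) case tractable at the cost of an explicit epipolar-geometry model.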


The paper above was part of the proceedings of a CEAS event, and as such the author has signed a publication agreement to have their paper published in the repository. If this paper is also available elsewhere, CEAS always links to the other source. CEAS takes great care in making the correct content available to the reader. If any mistakes are found in the listings, please contact us directly at papers@aerospacerepository.org and we will correct the listing promptly. CEAS cannot be held liable for mistakes in editorial or technical aspects, for omissions, or for the correctness of the content. In particular, CEAS does not guarantee the completeness or correctness of information contained in external websites that can be accessed via links from CEAS's websites. Despite careful review of the content of such linked external websites, CEAS cannot be held liable for their content; only the content providers of such external sites are liable for it. Should you notice any mistake in technical or editorial aspects of the CEAS site, please do not hesitate to inform us.