Raj Thilak Rajan

At TU Delft, I co-direct the Delft Sensor AI Lab (sensor fusion), the TU Delft Swarming Lab (drones), and the Lunar Zebro Lab (rovers).

In addition, I am fortunate to be funded by the Dutch Research Council (NWO), the Rijksdienst voor Ondernemend Nederland (RVO), the Netherlands Space Office (NSO) and the European Horizon programmes, to tackle the signal processing challenges of diverse, often inaccessible, autonomous systems such as drones, satellites, rovers and automated vehicles.



SPIN

SPIN (2026 - present)

Breakthrough technologies for interferometry in space

Partners: TU/e, ASTRON, Radboud, RRL, SRON, UAmsterdam, AAC Clyde Space, ISISpace, UTwente, NLR, RUG

Funding: NWO-NSO (450 kEuros)

The SPace-based Interferometry Network (SPIN) represents a unique combination of science, engineering and industry in the Netherlands, preparing internationally for the key enabling technological challenges in distributed space-based interferometry. It is a field in which the Netherlands has a strong heritage, starting with radio interferometry in the 1960s. To make the next leaps forward in this area and consolidate this strong position, science and technology are pulled together to respond to the opportunities offered by various research endeavours in institutes, ongoing activities in industry, and the programmes of international space agencies, including ESA. Within the SPIN collaboration, knowledge, experience and technological capabilities are aligned to prepare for the development of the next generation of scientific instrumentation, allowing us to tackle fundamental challenges in space-based interferometry.

Team: Dr. R.T. Rajan

AQUAFIND

AQUAFIND (2025 - present)

Distributed localisation and formation control of drones

Partners: TNO, SpectX, Deltares, Delta-N

Funding: RVO (1.5MEuros)

AQUAFIND (Aerial Quadcopter Units for Aquatic Flow Investigation and Nautical Data) builds upon recent advancements in drone technology, sensor capabilities and machine learning algorithms to develop an autonomous aerial drone swarming system for monitoring offshore assets. We develop distributed optimization algorithms that enable a drone swarm to estimate position and time, and to navigate and control autonomously into formations, which is necessary for sensing and monitoring. The drones employ cooperative algorithms using pairwise ranging between neighbouring drones, together with onboard inertial sensors (e.g., IMUs) and GPS tags, for positioning and timing. In addition, onboard distributed filtering ensures formation control of the drones using only neighbourhood information. One of the many challenges is robust inference, control and stability of the drones in turbulent environmental conditions, e.g., strong winds.
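
As a minimal illustration of the pairwise-ranging idea, the Python sketch below estimates a single drone's 2D position from noisy range measurements to neighbours at known positions, using Gauss-Newton least squares. It is a generic textbook baseline with assumed toy values, not the AQUAFIND algorithm itself:

import numpy as np

def range_ls_position(anchors, ranges, x0, iters=20):
    # Gauss-Newton least squares: refine position x so that predicted
    # ranges to known neighbour positions match the measured ranges.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - anchors                     # (N, 2) vectors to neighbours
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        J = diffs / dists[:, None]              # Jacobian of the range model
        r = ranges - dists                      # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

# Hypothetical example: four neighbouring drones at known positions
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.05 * rng.standard_normal(4)
print(range_ls_position(anchors, ranges, x0=[5.0, 5.0]))  # close to (3, 7)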

Team: Ir. Cansu Yıkılmaz, Dr. R.T. Rajan

SHAPEFUTURE

SHAPEFUTURE (2024 - present)

Robust inference and decision-making for automated vehicles

Partners: Various industry, SMEs and academia across Europe

Funding: EU ECS (35 MEuros)

ShapeFuture is an EU-funded project driving innovation in fundamental Electronic Components and Systems (ECS) for highly automated vehicles, enabling robust, powerful, fail-operational and integrated perception, cognition, AI-enabled decision-making, resilient automation and computing, and communications. Situational awareness is a key goal, with the ambition to move beyond ego-vehicle intelligence and develop cooperative perception and tracking systems, in which multiple sensing agents jointly perceive obstacles, track dynamic targets, and reconstruct evolving spatiotemporal processes in real time.
We develop methods for distributed sensing and learning for target tracking in populated areas, as well as for spatiotemporal tracking and reconstruction of dynamic processes from noisy, partial and heterogeneous measurements. Our focus is practical: turning uncertain sensor data into trustworthy perception for safer real-time decisions under communication and computation constraints. We use probabilistic sensor fusion (e.g., Gaussian processes, state-space models) and efficient optimization with statistical machine learning to make uncertainty-aware inference practical in real time.
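
To give a hedged sketch of the state-space flavour of such methods, the minimal constant-velocity Kalman filter below tracks a 1D target from noisy position measurements; the noise covariances and measurements are assumed toy values, not project data:

import numpy as np

# Constant-velocity model: state s = [position, velocity],
# with noisy measurements of position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement model
Q = 0.01 * np.eye(2)                    # assumed process noise covariance
R = np.array([[0.25]])                  # assumed measurement noise covariance

def kf_step(s, P, z):
    # One predict + update cycle; returns posterior mean and covariance.
    s, P = F @ s, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    s = s + K @ (z - H @ s)                        # update mean
    P = (np.eye(2) - K @ H) @ P                    # update covariance
    return s, P

s, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.22, 0.35, 0.41]:                  # toy measurements
    s, P = kf_step(s, P, np.array([z]))
print(s)  # estimated [position, velocity], with P quantifying uncertainty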

Team: Ir. Shao-Hsuan Hung, Ir. Yongsheng Han, Ir. Bishwadeep Das, Dr. R.T. Rajan

Moonshot

Moonshot (2023 - present)

Science, technology and social solutions for Lunar missions

Partners: Dr. Fabio Sebastiano, Dr. Alessandra Menicucci, Dr. Victor Trees

Funding: Faculty of EEMCS, Faculty of AE, Delft Space Institute (300 kEuros)

The Moonshot program shares humanity's ambition to become an interplanetary species, and in the upcoming years aims to advance both scientific and technological solutions pertinent to Lunar surface missions. To this end, we bring leading TU Delft research teams together in an interdisciplinary environment to improve the science, technology and social readiness levels of payloads destined for the Moon in the upcoming decade, and thus strengthen the Dutch contribution to this global mission. The program improves the technology readiness levels (TRLs) of four Lunar payloads, which stem from the expertise of TU Delft researchers across various faculties: LOUPE (Lunar Observatory for Unresolved Polarimetry of Earth), radiation sensing, autonomous sensing, and autonomous path planning of swarms. These Lunar payloads could be carried by the TU Delft Lunar Zebro rover (from the Dutch Zes Benige Robot, or six-legged robot), a robust swarm platform capable of scaling and adapting to the needs of the scientific payloads it houses. The interfaces of the rover are customizable, which allows the payload functionality to be technically integrated with the operations of the rover.

Team: Dr. Fabio Sebastiano, Dr. Alessandra Menicucci, Dr. Victor Trees, Dr. R.T. Rajan

Delft Sensor AI Lab

Delft Sensor AI Lab (2022 - present)

Bringing sensor fusion to the edge

Partners: Dr. Manon Kok (Faculty of ME, TUD)

Funding: TU Delft AI Labs program (1 MEuros)

Sensors are everywhere – measuring, processing and inferring from the environment. We also carry sensors with us personally, wherever we go: they are present in smartphones and activity trackers, and provide information about where we are, how we are moving and what we are doing. Technological advances have made sensors more available and more accurate in recent years, opening up many exciting applications. The field of sensor fusion focuses on combining data from different types of sensors in order to extract more information than is available from each sensor alone. Physical knowledge can be used, for instance about how a system can move over time or about sensor properties. AI can also be used: new models can be established using data from sensors and sensor networks. Sensor AI unites the fields of sensor fusion and AI, bringing physical knowledge into AI to enable the extraction of more information from the available sensor data. The Delft Sensor AI Lab focuses on developing novel algorithms, and on applying these tools in different fields. Examples include human motion estimation; distributed learning in sensor networks; and navigation of swarms of multi-agent systems such as robots, ships, drones and satellites.
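
As a small, generic example of combining physical knowledge with noisy sensor data (a classic complementary filter, not a method specific to the lab), the Python sketch below blends drift-prone gyroscope integration with noisy but drift-free accelerometer tilt readings to estimate an orientation angle; all signals are simulated:

import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    # Blend two sensors: gyroscope integration is accurate short-term
    # but drifts; accelerometer tilt is noisy but unbiased long-term.
    angle = accel_angles[0]
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        gyro_part = angle + rate * dt          # physical model: integrate rate
        angle = alpha * gyro_part + (1 - alpha) * acc
        estimates.append(angle)
    return np.array(estimates)

# Simulated data: constant 5 deg/s rotation over one second
t = np.arange(0.0, 1.0, 0.01)
true_angle = 5.0 * t
rng = np.random.default_rng(1)
gyro = 5.0 + 0.1 * rng.standard_normal(t.size)           # deg/s, noisy
accel = true_angle + 2.0 * rng.standard_normal(t.size)   # deg, noisy but unbiased
print(complementary_filter(gyro, accel)[-1])  # roughly the true final ~5 deg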

Team: Ir. Zhonggang Li, Ir. Ali Emre Balci, Dr. M. Kok, Dr. R.T. Rajan

CRANES

CRANES (2021 - 2025)

Distributed localisation of multi-agent systems

Funding: Faculty of EEMCS (239 kEuros)

In light of technological advances, the past decade has seen a rise of multi-agent systems (or swarms) in various sectors, including aerospace, robotics, automotive and aviation. Knowledge of position, timing and orientation (PTO) is vital for the healthy operation of any mobile network. Furthermore, it is imperative that any data collected and processed during the mission lifetime be stamped with this PTO information, for prudent inference during post-processing. To this end, this project aims to solve the challenges of multi-target PTO tracking in a mobile network with intermittent or no external information. The agents in the network must dynamically estimate their individual PTO, and cooperatively estimate the PTO of fellow agents, in the absence of a centralized master. In the CRANES (Cooperative Relative Navigation of Multi-agent Systems) project, distributed robust Bayesian algorithms for relative navigation will be developed to avoid a single point of failure and to minimize the processing and communication resources of the agents for practical implementation. The proposed solutions will be scalable to larger networks, and robust against sensing, processing and communication errors.
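
As a toy illustration of the decentralized principle behind such algorithms (plain average consensus, not the CRANES method itself), the sketch below shows agents on a hypothetical ring network agreeing on a common clock offset using only neighbourhood information, without a centralized master:

import numpy as np

# Each agent starts with a local clock offset (ms) and repeatedly averages
# with its ring neighbours; all values converge to the network mean.
offsets = np.array([2.0, -1.0, 0.5, 3.0, -0.5])
n = offsets.size
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = offsets.copy()
eps = 0.3                                   # consensus step size
for _ in range(100):
    x_new = x.copy()
    for i in range(n):
        # each agent moves toward its neighbours' values (local info only)
        x_new[i] += eps * sum(x[j] - x[i] for j in neighbours[i])
    x = x_new
print(x)  # every entry approaches offsets.mean() == 0.8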

Team: Ir. Ellen Riemens, Dr. R.T. Rajan

ADACORSA

ADACORSA (2020 - 2024)

Sensor fusion for autonomous navigation of drones

Partners: Various industry, SMEs and academia across Europe

Funding: EU ECSEL JU (45 MEuros)

ADACORSA (Airborne data collection on resilient system architectures) will develop algorithms to realize efficient, robust, cost-effective and data-fusion-based perception and control for autonomous drones. The overarching goal of this project is to provide technologies that make drones a safe and efficient component of the mobility mix, with reliable capabilities in extended beyond-visual-line-of-sight (BVLOS) operations. We develop algorithms for autonomous drone navigation, including localization and synchronization in BVLOS scenarios and/or GPS-denied environments, by utilizing RF signals from ground stations and/or in collaboration with other drones.
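
As a simple hedged sketch of the synchronization side of this problem (a generic linear least-squares clock fit, not the ADACORSA protocol), the Python example below estimates a drone's clock skew and offset from timestamped exchanges with a ground station; all values are simulated:

import numpy as np

# First-order clock model: t_local = (1 + skew) * t_ref + offset.
# Fit skew and offset by linear least squares over timestamped messages.
rng = np.random.default_rng(2)
t_ref = np.linspace(0.0, 10.0, 50)          # ground-station send times (s)
skew, offset = 1e-5, 0.02                   # assumed true clock errors
t_local = (1.0 + skew) * t_ref + offset + 1e-6 * rng.standard_normal(t_ref.size)

# Linear model t_local = a * t_ref + b, so skew = a - 1 and offset = b
A = np.column_stack([t_ref, np.ones_like(t_ref)])
(a, b), *_ = np.linalg.lstsq(A, t_local, rcond=None)
print(f"estimated skew   = {a - 1.0:.2e} (true {skew:.2e})")
print(f"estimated offset = {b:.4f} s (true {offset} s)")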

Team: Ir. Anurodh Mishra, Dr. Mostafa Mohammad Karimi, Dr. R.T. Rajan