Andrea Baraldi
Andrea Baraldi received his Laurea (M.S.) degree in Electronic Engineering from the University of Bologna, Italy, in 1989, a Master's degree in Software Engineering from the University of Padova, Italy, in 1994, and a PhD degree in Agricultural and Food Sciences from the University of Naples Federico II, Italy, in 2017. He has held research positions at the Italian Space Agency (ASI), Rome, Italy (2018-2021), Dept. of Geoinformatics (Z-GIS), Univ. of Salzburg, Austria (2014-2017), Dept. of Geographical Sciences, University of Maryland (UMD), College Park, MD (2010-2013), European Commission Joint Research Centre (EC-JRC), Ispra, Italy (2000-2002; 2005-2009), International Computer Science Institute (ICSI), Berkeley, CA (1997-1999), European Space Agency Research Institute (ESRIN), Frascati, Italy (1991-1993), Italian National Research Council (CNR), Bologna, Italy (1989, 1994-1996, 2003-2004). In 2009 he founded Baraldi Consultancy in Remote Sensing, a one-man company located in Modena, Italy. In Feb. 2014 he was awarded a Senior Scientist Fellowship at the German Aerospace Center (DLR), Oberpfaffenhofen, Germany. In Feb. 2015 he was a visiting scientist at the Ben Gurion Univ. of the Negev, Sde Boker, Israel, funded by the European Commission FP7 Experimentation in Ecosystem Research (ExpeER) project. His main interests center on image pre-processing and understanding, with special emphasis on the research and development of automatic near real-time Earth observation spaceborne/airborne image understanding systems in operating mode, consistent with human visual perception. Dr. Baraldi served as Associate Editor of the IEEE Trans. Neural Networks journal from 2001 to 2006. His awards include the Copernicus Masters Prize Austria 2020, the T-Systems Big Data Challenge winner award at the Copernicus Masters 2015, and the 2nd-place award at the 2015 IEEE GRSS Data Fusion Contest.
Phone: +393292648110
Address: Via GM Barbieri 23, 41124 Modena, Italy
Papers by Andrea Baraldi
and Artificial General Intelligence (AGI), this paper consists of two
parts. In the previous Part 1, existing EO optical sensory image-derived
Level 2/Analysis Ready Data (ARD) products and processes
are critically compared, to overcome their lack of harmonization/
standardization/interoperability and suitability in a new notion of
Space Economy 4.0. In the present Part 2, original contributions
comprise, at the Marr five levels of system understanding: (1) an
innovative, but realistic EO optical sensory image-derived semantics-
enriched ARD co-product pair requirements specification. First,
in the pursuit of third-level semantic/ontological interoperability, a
novel ARD symbolic (categorical and semantic) co-product, known
as Scene Classification Map (SCM), adopts an augmented Cloud versus
Not-Cloud taxonomy, whose Not-Cloud class legend complies with
the standard fully-nested Land Cover Classification System’s
Dichotomous Phase taxonomy proposed by the United Nations
Food and Agriculture Organization. Second, a novel ARD subsymbolic
numerical co-product, specifically, a panchromatic or multispectral
EO image whose dimensionless digital numbers are radiometrically
calibrated into a physical unit of radiometric measure,
ranging from top-of-atmosphere reflectance to surface reflectance
and surface albedo values, in a five-stage radiometric correction
sequence. (2) An original ARD process requirements specification.
(3) An innovative ARD processing system design (architecture),
where stepwise SCM generation and stepwise SCM-conditional EO
optical image radiometric correction are alternated in sequence. (4)
An original modular hierarchical hybrid (combined deductive and
inductive) computer vision subsystem design, provided with feedback
loops, where software solutions at the Marr two shallowest
levels of system understanding, specifically, algorithm and implementation, are selected from the scientific literature, to benefit from their technology readiness level as proof of feasibility, required in addition to proven suitability. To be implemented in operational mode at the space segment and/or midstream segment by both public and private EO big data providers, the proposed EO optical sensory image-derived semantics-enriched ARD product-pair and process reference standard is highlighted as linchpin for success of a new notion of Space Economy 4.0.
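For illustration only, the following minimal sketch (not taken from the paper) shows the first stage of such a radiometric correction sequence, namely the conversion of dimensionless digital numbers (DNs) into top-of-atmosphere (TOA) reflectance; the gain, offset, exoatmospheric solar irradiance and solar geometry values are hypothetical per-band metadata.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, earth_sun_dist_au, sun_zenith_deg):
    """Convert dimensionless digital numbers (DNs) of one spectral band into
    top-of-atmosphere (TOA) reflectance, the first stage of a radiometric
    correction sequence ending with surface reflectance and albedo.

    All parameters are hypothetical per-band calibration metadata:
      gain, offset       -- linear DN-to-at-sensor-radiance coefficients
      esun               -- mean exoatmospheric solar irradiance [W m-2 um-1]
      earth_sun_dist_au  -- Earth-Sun distance on the acquisition date [AU]
      sun_zenith_deg     -- solar zenith angle at the scene centre [degrees]
    """
    radiance = gain * dn.astype(np.float64) + offset           # at-sensor spectral radiance
    cos_sza = np.cos(np.deg2rad(sun_zenith_deg))
    toa_reflectance = (np.pi * radiance * earth_sun_dist_au ** 2) / (esun * cos_sza)
    return np.clip(toa_reflectance, 0.0, 1.0)                  # physically plausible range

# Toy usage with made-up metadata for a single 8-bit band
dn_band = np.random.randint(0, 256, size=(4, 4))
rho_toa = dn_to_toa_reflectance(dn_band, gain=0.1, offset=0.0,
                                esun=1536.0, earth_sun_dist_au=1.0,
                                sun_zenith_deg=35.0)
```

The later stages of the five-stage sequence (atmospheric, adjacency and topographic corrections toward surface reflectance and surface albedo) would follow the same per-band, metadata-driven pattern, conditioned on the stepwise SCM as described above.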
Data and Artificial General Intelligence (AGI), this two-part paper
identifies an innovative, but realistic EO optical sensory image-derived
semantics-enriched Analysis Ready Data (ARD) product-pair
and process gold standard as linchpin for success of a new
notion of Space Economy 4.0. To be implemented in operational
mode at the space segment and/or midstream segment by both
public and private EO big data providers, it is regarded as necessary-but-
not-sufficient “horizontal” (enabling) precondition for: (I)
Transforming existing EO big raster-based data cubes at the midstream
segment, typically affected by the so-called data-rich information-
poor syndrome, into a new generation of semantics-enabled
EO big raster-based numerical data and vector-based categorical
(symbolic, semi-symbolic or subsymbolic) information cube
management systems, eligible for semantic content-based image
retrieval and semantics-enabled information/knowledge discovery.
(II) Boosting the downstream segment in the development of an
ever-increasing ensemble of “vertical” (deep and narrow, user-specific
and domain-dependent) value-adding information products
and services, suitable for a potentially huge worldwide market of
institutional and private end-users of space technology. For the
sake of readability, this paper consists of two parts. In the present
Part 1, first, background notions in the remote sensing metascience
domain are critically revised for harmonization across the multidisciplinary
domain of cognitive science. In short, keyword “information”
is disambiguated into the two complementary notions of
quantitative/unequivocal information-as-thing and qualitative/
equivocal/inherently ill-posed information-as-data-interpretation. Moreover, buzzword “artificial intelligence” is disambiguated into
the two better-constrained notions of Artificial Narrow Intelligence as part-without-inheritance-of AGI. Second, based on a better-defined and better-understood vocabulary of multidisciplinary terms, existing EO optical sensory image-derived Level 2/ARD products and processes are investigated at the Marr five levels of understanding of an information processing system. To overcome their drawbacks, an innovative, but realistic EO optical sensory image-derived semantics-enriched ARD product-pair and process gold standard is proposed in the subsequent Part 2.
We propose our own definition of fuzzy neural integrated networks. This criterion is proposed as a unifying framework for the comparison of algorithms. In the first part of this paper, classification methods based on rule sets or numerical data are reviewed, together with specific methods for handling classification in image processing. In the second part of this paper, several fuzzy neural clustering models are reviewed and compared. These models are: i) the Self-Organizing Map (SOM); ii) Fuzzy Learning Vector Quantization (FLVQ); iii) the Carpenter-Grossberg-Rosen Fuzzy Adaptive Resonance Theory (CGR Fuzzy ART); iv) Growing Neural Gas (GNG); and v) the Fully Self-Organizing Simplified Adaptive Resonance Theory (FOSART).
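As a reading aid, the sketch below implements a minimal one-dimensional Self-Organizing Map (SOM), the first of the clustering models listed above; the hyper-parameters (number of units, learning-rate and neighbourhood schedules) are arbitrary choices, not values from the paper.

```python
import numpy as np

def train_som(data, n_units=16, n_epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D Self-Organizing Map: prototypes are pulled toward each
    input sample with a neighbourhood kernel that shrinks over time."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    units = np.arange(n_units)
    n_steps, step = n_epochs * len(data), 0
    for epoch in range(n_epochs):
        for x in rng.permutation(data):
            t = step / n_steps                       # normalised training time
            lr = lr0 * (1.0 - t)                     # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3        # decaying neighbourhood radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))    # best-matching unit
            h = np.exp(-((units - bmu) ** 2) / (2.0 * sigma ** 2))  # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)              # competitive update
            step += 1
    return weights

# Toy usage: cluster 2-D points into a 1-D chain of prototypes
data = np.random.default_rng(1).normal(size=(200, 2))
prototypes = train_som(data)
```

The other models reviewed (FLVQ, CGR Fuzzy ART, GNG, FOSART) differ mainly in how the winning unit, the neighbourhood and the number of units are determined, which is what the survey compares.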
that semantic content-based image and information retrieval is possible in big EO image databases, allowing users to query and analyse EO data on a higher semantic level (i.e. based on at least basic land cover units and encoded ontologies). This includes: (1) fully automatic semantic enrichment of Sentinel-2 images up to land cover types ready for semantic content-based analysis; (2) the use of suitable database technologies to develop spatio-temporal modelling and querying techniques using encoded ontologies to decrease the complexity of queries for user interaction; (3) a Web interface for human-like queries based on semantic models of the spatio-temporal 4D physical-world domain; and (4) the demonstration of the potential of the generic data &
information cube in future service developments based on different service types.
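As an illustrative sketch only (not the project's actual interface or encoding), the snippet below shows what a semantic content-based query against a semantics-enabled cube could look like, assuming each acquisition is stored as a raster of per-pixel land cover class labels together with cloud and cloud-shadow quality classes.

```python
import numpy as np

# Hypothetical class legend of a scene classification map (SCM) layer;
# codes and names are illustrative, not the project's actual encoding.
LEGEND = {1: "vegetation", 2: "bare soil", 3: "water", 4: "cloud", 5: "cloud shadow"}

def semantic_query(scm_stack, dates, wanted_class, min_fraction=0.5):
    """Toy semantic query: return the acquisition dates for which at least
    `min_fraction` of the valid (non-cloud, non-shadow) pixels belong to
    the requested land cover class."""
    code = {name: c for c, name in LEGEND.items()}[wanted_class]
    hits = []
    for scm, date in zip(scm_stack, dates):
        valid = ~np.isin(scm, [4, 5])                 # mask cloud / cloud-shadow quality layers
        if valid.any() and (scm[valid] == code).mean() >= min_fraction:
            hits.append(date)
    return hits

# Toy cube: three 100x100 SCMs with random class labels
rng = np.random.default_rng(0)
cube = rng.integers(1, 6, size=(3, 100, 100))
print(semantic_query(cube, ["2019-05-01", "2019-06-01", "2019-07-01"], "vegetation", 0.2))
```

A production system would of course run such queries inside the database/cube engine against encoded ontologies rather than in client-side array code; the sketch only illustrates querying on a semantic rather than pixel-value level.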
applied for the segmentation of Magnetic Resonance (MR) images. The objective of the work is to assess the effectiveness of a fuzzy-neuro approach for the detection of the small lesions present in thick MR slices of multiple sclerosis patients. The data set included the Proton Density (PD), T2 and T1 weighted spin-echo (SE) bands and a new T1-weighted three-dimensional sequence, i.e. the magnetization-prepared rapid gradient echo (MP-RAGE), of a volunteer. The Fuzzy Learning Vector Quantization (FLVQ) and the Fully Self-Organizing Simplified Adaptive Resonance Theory (FOSART) models have been used for the semi-automatic tissue segmentation of the multispectral data set. Both models were trained with the pixels extracted from some labelled areas, interactively selected by a neuro-radiologist on the input raw images. A quantitative comparison between the performance of the two neural network models has been provided on the basis of the labelled areas.
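As a simplified stand-in for the FLVQ/FOSART workflow described above (not the models themselves), the sketch below assigns every multispectral MR pixel the tissue label of its nearest class prototype, where each prototype is the mean vector of the operator-labelled training pixels for that tissue.

```python
import numpy as np

def nearest_prototype_segmentation(image, labelled_pixels):
    """Simplified semi-automatic tissue segmentation: `image` is an (H, W, B)
    multispectral MR slice (e.g. PD, T2, T1 SE, MP-RAGE bands) and
    `labelled_pixels` maps tissue names to arrays of (B,)-dimensional
    training vectors picked interactively by the operator."""
    names = sorted(labelled_pixels)
    prototypes = np.stack([labelled_pixels[n].mean(axis=0) for n in names])  # one prototype per tissue
    pixels = image.reshape(-1, image.shape[-1])
    dists = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2)
    labels = dists.argmin(axis=1).reshape(image.shape[:2])                    # index into `names`
    return labels, names

# Toy usage with two hypothetical tissue classes and 4 MR bands
rng = np.random.default_rng(2)
slice_4band = rng.normal(size=(64, 64, 4))
training = {"white matter": rng.normal(1.0, 0.1, size=(30, 4)),
            "lesion": rng.normal(-1.0, 0.1, size=(30, 4))}
seg, names = nearest_prototype_segmentation(slice_4band, training)
```

FLVQ and FOSART refine this idea by learning several adaptively updated prototypes per class rather than a single fixed class mean.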
Index Terms — Geometric (shape and orientation) features, human vision, image segmentation, object-based image analysis, Open Geospatial Consortium, planar object, quality indicator.
and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is synonym of semantics-enabled
knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS ⊃ ESA EO Level 2 product, with CV ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for a yet-unfulfilled GEOSS development is systematic generation at the ground segment of ESA EO Level 2 product.
Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO image understanding system (EO-IUS) in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery. EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, (ii)
imaging sensor, either: (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically, bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based)
feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform in linear time a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
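A hypothetical skeleton of the closed-loop two-subsystem design described above is sketched below; class and method names are illustrative only and do not reflect the project's actual software.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical skeleton of the closed-loop EO-IU4SQ design described above;
# names are illustrative, not the project's actual API.

@dataclass
class Level2Product:
    radiometrically_corrected_image: object   # numerical co-product
    scene_classification_map: object          # SCM co-product with quality layers
    iteration: int = 0

class EOIUSubsystem:
    """Primary hybrid feedback subsystem: single-date MS image -> ESA EO Level 2 product."""
    def process(self, ms_image, previous: Optional[Level2Product] = None) -> Level2Product:
        # ... deductive + inductive stages, with stepwise SCM generation alternated
        # with SCM-conditional radiometric correction, would go here ...
        it = 0 if previous is None else previous.iteration + 1
        return Level2Product(ms_image, scene_classification_map=None, iteration=it)

class EOSQSubsystem:
    """Secondary subsystem: semantic querying / SCBIR over accumulated products."""
    def __init__(self) -> None:
        self.catalogue: List[Level2Product] = []
    def ingest(self, product: Level2Product) -> None:
        self.catalogue.append(product)
    def feedback(self) -> Optional[Level2Product]:
        return self.catalogue[-1] if self.catalogue else None

def closed_loop(ms_image, n_iterations: int = 3) -> List[Level2Product]:
    """Alternate the two subsystems; product value-added is meant to grow with iterations."""
    eo_iu, eo_sq = EOIUSubsystem(), EOSQSubsystem()
    for _ in range(n_iterations):
        product = eo_iu.process(ms_image, eo_sq.feedback())
        eo_sq.ingest(product)
    return eo_sq.catalogue
```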