4d GIS
Recent papers in 4d GIS
A recent trend in archaeological research has focused on producing a real-time methodology for 3D digital models as archaeological documentation within the excavation setting. While such methodologies are now firmly established, what remains is to examine how 3D models can be integrated more fully alongside other forms of archaeological documentation. This work explored one avenue by developing a method that combines the interpretative power of traditional archaeological drawings with the realistic visualisation capacity of 3D digital models. An experiment was initiated during archaeological excavations at Uppåkra, Sweden, where photographic data were captured to produce 3D digital models in PhotoScan. These models were geospatially located within ESRI's 3D GIS ArcScene, where shapefile editing tools were used to draw over their surfaces in three dimensions. All drawings closely followed the single-context method of drawing, were allotted context numbers, and were given descriptive geodatabase attributes. This methodology resulted in the further integration of 3D models alongside other forms of archaeological documentation. The drawings increased the communicative power of archaeological interpretation by enabling the information to be disseminated in a 3D environment alongside other formats of data that would otherwise have been disconnected in 2D space. Finally, the database attributes permitted the drawings' complete integration within the geodatabase, making them available for query and other analytical procedures. Archaeological information is three-dimensional; therefore, archaeologists must approach documentation with this in mind. This technique demonstrated that 3D models are a fluid form of documentation, allowing accurate preservation of the archaeology while enabling new forms of data to be derived, all within a limited amount of time.
Archaeologists must begin to effect change towards embracing 3D models and their associated applications as a standard tool within the excavator's toolbox.
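The workflow described above can be illustrated with a minimal sketch. The class and field names below are hypothetical stand-ins for geodatabase features, not the authors' actual schema; the point is that once each drawn context carries a context number and descriptive attributes, attribute queries become possible:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a geodatabase feature: one single-context
# drawing traced over the surface of a 3D model. Vertices carry x, y, z.
@dataclass
class ContextDrawing:
    context_number: int
    vertices: list                      # [(x, y, z), ...] on the model surface
    attributes: dict = field(default_factory=dict)

drawings = [
    ContextDrawing(1001, [(0, 0, 10.2), (1, 0, 10.1), (1, 1, 10.3)],
                   {"interpretation": "floor layer", "period": "Iron Age"}),
    ContextDrawing(1002, [(2, 2, 9.8), (3, 2, 9.7), (3, 3, 9.9)],
                   {"interpretation": "posthole", "period": "Iron Age"}),
]

# Attribute query, analogous to an attribute selection in a geodatabase.
def query(drawings, **criteria):
    return [d for d in drawings
            if all(d.attributes.get(k) == v for k, v in criteria.items())]

iron_age = query(drawings, period="Iron Age")
print([d.context_number for d in iron_age])  # → [1001, 1002]
```

In ArcScene itself this selection would be expressed against shapefile attribute tables rather than Python objects, but the principle is the same: the attributes make drawn interpretations queryable data.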
During the past decade, the implementation of 3D visualization and Geographic Information Systems (GIS) in archaeological research has increased and is now well established. However, combining the two remains rather complicated when faced with archaeological data. Some of the characteristics of this discipline demand the development of applications able to cope with all of the specificities of archaeological data. Our research aims to create an Archaeological Information System (AIS) that will gather all of the characteristics of an archaeological work. In order to develop such an AIS, our first step was to identify its purposes and, consequently, the features that should be available to users. As it is intended to support archaeological research, it is of the utmost importance that the particularities of such a study are also taken into account. Moreover, the AIS is intended to incorporate point clouds that serve as the base for the three-dimensional model. These 3D point clouds result from photogrammetry and/or lasergrammetry and, at a later stage, will be inserted into a GIS-like structure. The archaeological data will then be linked to the relevant section of the 3D model. Throughout these various stages, and during the development of the AIS itself, we will encounter a series of issues that need to be addressed in order to produce a working system. This paper aims to identify and define the AIS characteristics, as well as the issues and obstacles we will face, so that this system becomes a functional tool for archaeological research.
We would like to invite you to join this exciting new project as a chapter contributor on one of the topics listed below. Since this is a textbook, a great deal of each chapter entails a survey of the topic under the paradigm of cyber-physical systems, what can be done onboard and remotely, the distributed nature of the system, and some exercises in futurology (anticipating trends can shed some light on upcoming designs). The IET will bring great visibility to your work. Each chapter should be around 20-25 pages and can be submitted as a Word or LaTeX file. The IET will send you additional information (formatting, permission form, etc.) with the contributor's agreement once you have decided to contribute to the book. Visit http://www.theiet.org/resources/author-hub/books/index.cfm for all contributor information for an IET research-level book. Each book is expected to have a total of 500 printed pages (with approximately 550 words per page and a 20% allowance for figures and tables). We have included a tentative schedule and list of topics below. If this is something you would consider, please send us the title of your chapter, a short description/abstract of the chapter content, and your full contact details. We expect original content and new insights for this book. You can, of course, reuse published material, but the percentage of reused material in a chapter should be less than 40%. The IET will run plagiarism-detection software on the full manuscript to verify that you are including original material and will reject chapters that contain a large amount of already-published material, so please take this into consideration. We would appreciate your feedback by December 31, 2017. Please do not hesitate to contact us if you have any queries. We look forward to working with you towards a successful publication.
The IET indexes its books and journals in SCOPUS and IEEE Xplore.
Computer Vision (CV) and sensors play a decisive role in the operation of Unmanned Aerial Vehicles (UAVs), but there is a void when it comes to analysing the extent of their impact on the entire UAV system. In general, the fact that a UAV is a Cyber-Physical System (CPS) is not taken into account. This proposal expands on earlier books covering the use of CV and sensing in UAVs. Among other things, an entirely autonomous UAV can (i) obtain information about the environment, (ii) work for an extended period of time without human interference, (iii) move all or part of itself around its operating location without human help, and (iv) stay away from situations dangerous to people and their possessions. A Vision System (VS) entails the way CV data will be utilized, the appropriate architecture for total avionics integration, the control interfaces, and the UAV operation. Since the core of a VS is its sensors and cameras, multi-sensor fusion, navigation, hazard detection, and real-time ground correlation are important operational aspects that can benefit from CV knowledge and technology. This book aims to collect and shed light on the existing information on CV software and hardware for UAVs, as well as to pinpoint aspects that need additional thinking. It will list standards and a set of prerequisites (or the lack thereof) for CV deployment in UAVs. The issue of data fusion takes centre stage as the book explores ways to deal with sensor data and images, as well as their integration and display. Best practices for fusing image and sensor information to enhance UAV performance by means of CV can greatly improve all aspects of the corresponding CPS.
The CPS viewpoint can improve the way UAVs interact with the Internet of Things (IoT), use cloud computing, meet communications requirements, implement hardware/software paradigms necessary to handle video streaming, incorporate satellite data, and combine CV with Virtual/Augmented Realities.
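As a concrete flavour of the multi-sensor fusion the proposal highlights, here is a minimal complementary-filter sketch: it blends a fast-but-drifting integrated gyro estimate with a noisy-but-stable accelerometer angle. All numbers are illustrative, not real flight data, and real UAV autopilots typically use more sophisticated estimators (e.g. Kalman filters):

```python
# Minimal sketch of attitude-sensor fusion on a UAV: a complementary
# filter blending the integrated gyro rate with the accelerometer angle.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # High-pass the integrated gyro term, low-pass the accelerometer term.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
samples = [(0.5, 0.02), (0.5, 0.03), (0.4, 0.05)]  # (gyro deg/s, accel deg)
for gyro_rate, accel_angle in samples:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
print(round(angle, 4))
```

The same blend-two-estimates pattern scales up to fusing CV-derived pose with inertial data, which is where the CPS viewpoint of the book applies.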
VOLUME 2 - DEPLOYMENT AND APPLICATIONS: This volume introduces procedures, standards, and prerequisites for the deployment of Computer Vision (CV) in UAVs from the application point of view. It discusses existing and desirable open-source software tools, image banks, benchmarks, Quality of Experience (QoE), Quality of Service (QoS), and how CV can benefit from a Robot Operating System (ROS) in surveillance, remote sensing, inspection, maintenance, and repair, among other uses, while offering an assessment of current bottlenecks and trends. It will pave the road towards better studies on the necessity and viability of implementing collaborative environments for visualization, knowledge management, and teleoperation of UAVs. This is planned as the companion volume to Estrela, Hemanth, Saotome (Eds.), Imaging and Sensing for Unmanned Aerial Vehicles: Volume 1 - Control and Performance.
Editor(s):
Dr. Vania V. Estrela, https://www.linkedin.com/in/vania-v-estrela-96b9bb29/
Universidade Federal Fluminense (UFF), RJ, Brazil
[email protected]
Dr. Jude Hemanth, https://www.karunya.edu/ece/drjude.html
Karunya University, Coimbatore, India
[email protected]
Dr. Osamu Saotome, https://www.linkedin.com/in/osamu-saotome-83935818
Instituto Tecnológico de Aeronáutica, CTA-ITA-IEEA, São José dos Campos, SP, Brazil
[email protected]
CONTENTS:
1. Image Acquisition and Restoration in UAVs
2. Image Fusion in UAVs
3. Super-Resolution Imaging in UAVs
4. 2D/3D/4D Imaging in UAVs
5. Multi-view Image and ToF Sensor Fusion in UAVs
6. Range Imaging in UAVs
7. Multispectral and Hyperspectral Imaging in UAVs
8. Imaging Standards and UAVs
9. Virtual/Augmented Reality in UAVs
10. Collaborative Environments in UAVs
11. Archiving, Storage, and Compression in UAVs
12. Analysis, Indexing, Retrieval in UAVs
13. Multicast/Broadcast/Streaming in UAVs
14. Modelling, Simulation and UAVs
15. Image-Oriented Estimation and Identification in UAVs
16. Open Source Software in UAVs
17. Image Banks and Benchmarks in UAVs
18. Quality of Experience (QoE) and Quality of Service (QoS) in UAVs
19. Robot Operating System (ROS) in UAVs
20. Cloud Computing in UAVs
Specification and Schedule
July 1st, 2017: Call for Chapter Abstracts
September 1, 2017: One-Page Chapter Abstract (up to 1000 words) Submission Deadline. Free style.
A proposal must outline one of the topics from the list above (mention its number, for instance 1, and reference PBCE120B).
November 30, 2017: Last Day for Notification of Acceptance
Jan 30, 2018: Full Chapter Submissions
March 30, 2018: Review chapter submissions and send comments to authors
May 31, 2018: Receive revised Chapter Submissions
June 30, 2018: Notification of Final Acceptance
July 31, 2018: Gather all material, figure files and copyrights permission forms
Aug 30, 2018: Book editors to finalize introduction and conclusion chapters
Sept 15, 2018: Delivery of full manuscript to the IET
Scheduled publication: Feb/March 2019
Readership: Graduate students and Researchers in the fields of Electrical and Computer Engineering, Computer Science, Mechanical Engineering, Civil Engineering, Humanitarian Engineering, Control Systems, Geoscience and Remote Sensing, Instrumentation and Measurement, Intelligent Transportation Systems, Oceanic Engineering, Safety Engineering, Reliability, Robotics and Automation, Signal Processing, Technology and Engineering Management, Environmental Engineering, Public Health Management, Non-Invasive Testing/Monitoring and Vehicular Technology.
Additional Information: Dr. Vania V. Estrela, [email protected]
Dr. Jude Hemanth, [email protected]