The end is in sight. I can see the light at the end of the tunnel. I just completed a comprehensive final exam over the entire GIS program, have completed my 40-page “Masters Report”, and the only thing left is the oral presentation of the report coming up on Nov 16th. So, I am basically ALL FINISHED! WooHooo!
The comprehensive final was pretty rough. One of the hazards of stretching the program out over four years is that you forget a lot of what you learned early on. It was an open-book take-home exam that you had 48 hours to complete, and it definitely took the entire weekend to get it done. The worst part was some of the geostatistics problems. Statistics was never one of my strong suits to begin with.
My “Masters Report”, which is similar to a thesis, comprises an investigation into the deployment of geo-enabled sensors and how this technology will be implemented in more advanced applications. The initial focus is on sensor web services and real-time data, aka Sensor Web Enablement (SWE), which will enable the discovery, exchange, and processing of sensor observations, as well as the tasking of sensor systems through the World Wide Web. The ubiquitous spread of the Internet, low-cost wireless communications networks, and advanced sensor manufacturing technologies have opened up an entirely new realm of sensor capability. Sensors are suddenly all around us and are rapidly becoming location-aware. As these devices are connected to the Internet a vast amount of data is becoming available.
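To give a flavor of what SWE looks like in practice, here is a minimal sketch of how a client might assemble a request to an OGC Sensor Observation Service (SOS), one of the SWE web services for retrieving observations over plain HTTP. The endpoint URL, offering name, and phenomenon URN below are illustrative placeholders, not a real service; the sketch only builds the query string and never contacts a server.

```python
from urllib.parse import urlencode

# Hypothetical SOS endpoint -- a real deployment would publish its own URL.
SOS_ENDPOINT = "http://example.org/sos"

def get_observation_url(offering, observed_property, start, end):
    """Build a KVP-style SOS 1.0 GetObservation request URL.

    An SWE client would issue this over HTTP GET and receive sensor
    observations back as Observations & Measurements (O&M) XML.
    """
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "eventTime": f"{start}/{end}",  # ISO 8601 time interval
    }
    return f"{SOS_ENDPOINT}?{urlencode(params)}"

url = get_observation_url(
    "WATER_LEVEL",                          # illustrative offering name
    "urn:ogc:def:phenomenon:waterlevel",    # illustrative phenomenon URN
    "2007-11-01T00:00:00Z", "2007-11-02T00:00:00Z",
)
print(url)
```

The point is just how lightweight the interface is: discovering and pulling sensor readings becomes an ordinary web request, which is what makes the "sensors on the Web" vision plausible.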
A new physical–digital landscape is emerging, composed of sensor networks linking places and spaces to unprecedented amounts of information. As these tiny sensors and actuators gradually become embedded in our environment, an opportunity will arise to make these systems operate with spatial intelligence. The resulting systems will become increasingly aware, able to make complex interactive decisions without human intervention. Not only will they programmatically respond to environmental changes, they will also anticipate our human context and tasks to create a safely usable flow of precisely relevant information.
As this scenario plays out we find ourselves facing a problem of our own devising – how are we to communicate, coordinate, process, and react to this voluminous amount of sensory data? Clearly, as the number of sensors increases to thousands, millions, and beyond, we must develop an architecture that provides standard methods for building, maintaining, and understanding sensor networks. This abundance of data must be managed, mined, or thrown out at the risk of losing something of value. A seeming consequence of information abundance is attention scarcity. How will we determine what to pay attention to and what gets pushed to the periphery?
One answer is to build intelligence into the system. Artificial Intelligence (AI) developers have made significant advances in the software realm, and one possible answer to this problem is found in intelligent agents. My report examines the use of software agents and AI to augment Sensor Web applications.
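As a toy illustration of the attention-scarcity idea, here is a sketch of the simplest possible "agent": a rule that triages a stream of geo-tagged readings, surfacing only anomalies and pushing everything else to the periphery. The reading format, sensor names, and threshold rule are my own illustrative assumptions, not part of any SWE standard or a real agent framework.

```python
def triage(readings, baseline, tolerance):
    """Return only the readings whose value deviates from the baseline
    by more than the tolerance -- the rest get ignored."""
    return [r for r in readings if abs(r["value"] - baseline) > tolerance]

# Hypothetical geo-tagged water-gauge readings (made-up values).
stream = [
    {"sensor": "gauge-1", "lat": 30.27, "lon": -97.74, "value": 4.9},
    {"sensor": "gauge-2", "lat": 30.28, "lon": -97.75, "value": 5.1},
    {"sensor": "gauge-3", "lat": 30.29, "lon": -97.73, "value": 9.8},
]

alerts = triage(stream, baseline=5.0, tolerance=1.0)
print([r["sensor"] for r in alerts])  # -> ['gauge-3']
```

A real intelligent agent would of course learn its rules, reason about context, and coordinate with other agents rather than apply a fixed threshold, but the shape of the problem is the same: millions of observations in, a trickle of relevant information out.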