Summary form only given. The extremely fast growth of Internet services, web and mobile applications, and advances in the related concepts of Pervasive, Ubiquitous, and Cloud Computing have stimulated the production of tremendous amounts of data, partially available online (call metadata, texts, emails, social media updates, photos, videos, location data, etc.). Even with the power of today's modern computers, it is still a big challenge for business and government organizations to manage, search, analyze, and visualize this vast amount of data as information. Data-Intensive computing, which is intended to address these problems, has developed rapidly during the last few years, yielding strong results. A data-intensive computing framework is a complex system that includes hardware, software, communications, and a Distributed File System (DFS) architecture. Only a small part of this huge amount of data is structured (databases, XML, logs) or semi-structured (web pages, email); over 90% of it is unstructured, meaning the data has no predefined structure or model. Generally, unstructured data is useless unless data mining and analysis techniques are applied to it: data is worth something only if it can be processed and understood, otherwise it is of little value. Two key components of any data-intensive system are data storage and data processing. So, which technologies, techniques, platforms, and tools are best for storing and processing Big Data? How does the Big Data era affect the technological landscape? These and many other questions will be answered during the talk.