
Big Data Framework for Storage Extraction and Identification of Data using Hadoop Distributed File System
B. Suvarnamukhi1, M. Seshashayee2

1B. Suvarnamukhi, Assistant Professor, Department of CSE, St. Mary’s Group of Institutions, India.

2M. Seshashayee, Department of Computer Science, GITAM Deemed to be University, Visakhapatnam (Andhra Pradesh), India.

Manuscript received on 24 November 2019 | Revised Manuscript received on 12 December 2019 | Manuscript Published on 30 December 2019 | PP: 392-394 | Volume-9 Issue-2S3 December 2019 | Retrieval Number: B10021292S319/2019©BEIESP | DOI: 10.35940/ijitee.B1002.1292S319

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Big data refers to the growing challenge that organizations face today as they manage enormous and rapidly expanding sources of data together with an increasingly complex range of analyses. The problem spans computing infrastructure and access to mixed structured and unstructured data drawn from varied sources such as networks, records, and stored images. Hadoop is an open-source software framework comprising a number of components designed specifically for large-scale distributed data storage, with the Hadoop Distributed File System (HDFS) serving as the storage layer. MapReduce is a parallel programming model for processing that data.

Keywords: Big Data, Hadoop, MapReduce, Parallel Programming.
Scope of the Article: Big Data Quality Validation
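
To illustrate the MapReduce parallel programming model referred to in the abstract, the classic Hadoop word-count job is sketched below in Java. This is a minimal, illustrative example only: the class names (WordCount, TokenizerMapper, IntSumReducer) and the HDFS input and output paths are assumptions for illustration, not details taken from the paper.

// Minimal sketch of a Hadoop MapReduce word-count job.
// Class names and paths are hypothetical examples.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in each input line read from HDFS.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts for each word; keys are processed in parallel across reducers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. an HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A job of this form is typically packaged as a JAR and submitted with, for example, hadoop jar wordcount.jar WordCount /input /output, where /input and /output are hypothetical HDFS directories.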