Analytical Design of the DIS Architecture: The Hybrid Model
Mahesh S Nayak1, M. Hanumanthappa2, B R Prakash3, Dattasmita H V4

1Mahesh S Nayak*, Research Scholar, Bharathiar University, Coimbatore.
2Dr. M. Hanumanthappa, Professor, Department of Computer Science & Applications, Bangalore University, Bangalore.
3Dr. B R Prakash, Assistant Professor, Department of Computer Science, Government First Grade College, Tiptur, Karnataka.
4Dattasmita H V, ICT Manager, Tumakuru Smart City Limited, Tumakuru, Karnataka, India.
Manuscript received on February 10, 2020. | Revised Manuscript received on February 23, 2020. | Manuscript published on March 10, 2020. | PP: 1032-1036 | Volume-9 Issue-5, March 2020. | Retrieval Number: D1454029420/2020©BEIESP | DOI: 10.35940/ijitee.D1454.039520
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Over the last decades, the emergence of Internet appliances has driven a sharp increase in data usage, which has had a strong impact on storage and mining technologies. Scientific and research fields also produce data of mixed structure, viz. structured, semi-structured, and unstructured data, and the processing of such data has grown accordingly under increasingly demanding requirements. Sustainable technologies exist to address these challenges and to deliver scalable services through effective physical infrastructure (for mining), smart networking solutions, and suitable software approaches. Cloud computing, in particular, targets data-intensive computing by facilitating the scalable processing of huge data sets. Even so, the problem remains only partly addressed, since data volumes continue to grow exponentially. At this juncture, the recommended approach is the well-known MapReduce model, which condenses huge and voluminous data. However, the current model is less fault tolerant and less reliable, shortcomings that can be overcome by the Hadoop architecture. Hadoop, by contrast, is fault tolerant and provides high throughput, which makes it suitable for applications with huge data sets and file systems requiring streaming access. This paper examines what architectural and design changes are necessary to combine the benefits of the Everest model, the HBase approach, and the existing MapReduce algorithms.
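As context for the MapReduce model on Hadoop/HDFS referred to in the abstract, the sketch below shows the canonical word-count job written against the Hadoop MapReduce Java API. It is an illustrative assumption for readers unfamiliar with the model, not code taken from this paper or from the proposed hybrid DIS architecture; the input and output paths passed on the command line are likewise placeholders.

```java
// Minimal Hadoop MapReduce word-count sketch (illustrative only).
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. an HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The map and reduce functions are stateless and operate on key/value pairs, which is what lets the Hadoop runtime split the input across HDFS blocks, rerun failed tasks for fault tolerance, and stream intermediate data through the shuffle, the properties the abstract attributes to the Hadoop architecture.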
Keywords:  MapReduce, HPC, HBase, Hadoop, HDFS, Cloud Computing, Google MapReduce, HPQL, Cap3, ADT.
Scope of the Article: Probabilistic Models and Methods