
Building Block Identification from Deep Neural Network Codes for Deep Learning Modeling Support
Ki Sun Park1, Kyoung Soon Hwang2, Keon Myung Lee3

1Ki Sun Park, Department of Computer Science, Chungbuk National University, Cheongju, Korea.

2Kyoung Soon Hwang, Department of Computer Science, Chungbuk National University, Cheongju, Korea.

3Keon Myung Lee, Department of Computer Science, Chungbuk National University, Cheongju, Korea.

Manuscript received on 01 January 2019 | Revised Manuscript received on 06 January 2019 | Manuscript Published on 07 April 2019 | PP: 370-375 | Volume-8 Issue-3C January 2019 | Retrieval Number: C10800183C19/2019©BEIESP

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Recent successful applications of deep learning techniques have attracted the attention of developers in various domains. Many deep learning model codes are publicly available for developers to refer to. This paper introduces a method to identify building blocks of deep neural network models for reuse and visualization in model development. Methods/Statistical analysis: Deep learning models have a layered architecture in which the number of layers can range from tens to hundreds. Developers design their own deep learning models by composing them from layers. A GUI-based modeling tool can be very useful for designing such models, with which a new model is created or existing deep learning model codes are imported and modified. To reuse useful building blocks in existing models and to visualize models in an abstract and hierarchical view, a building block identification method is proposed that uses a graph structure analysis technique and a frequent subsequence mining technique. It first transforms a deep neural network model into a graph structure, then identifies macro-blocks, and finally mines the frequent consecutive subsequences. Findings: The macro-blocks in deep neural networks are detected by the proposed algorithm, which isolates subgraphs that start with a node with a single fan-in, end with a node with fan-out 1, and contain nodes with more than one fan-in or fan-out between the start and end nodes. The frequent consecutive subsequences of nodes are recognized as candidate building blocks for deep learning model construction. Building blocks may contain smaller building blocks, so hierarchical and abstract nesting can be implemented in the visual representation of deep learning models with many layers. The macro-blocks and the frequent consecutive blocks are chosen as the building blocks and registered in the GUI-based deep learning modeling tool.
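The macro-block detection described above might be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the dictionary-based graph representation, the layer names, and the cut-width bookkeeping (opening a block at a fork and closing it once all branch edges reconverge) are assumptions made for the example.

```python
from collections import defaultdict

def topological_order(graph):
    """Kahn's algorithm over a DAG given as {node: [successors]}."""
    indeg = defaultdict(int)
    for u, succs in graph.items():
        indeg.setdefault(u, 0)
        for v in succs:
            indeg[v] += 1
    ready = [n for n, d in indeg.items() if d == 0]
    order = []
    while ready:
        u = ready.pop()
        order.append(u)
        for v in graph.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

def find_macro_blocks(graph):
    """Isolate single-entry/single-exit subgraphs that contain branching.

    Scans the DAG in topological order while tracking the number of edges
    crossing the current cut: a block opens at a fork (fan-out > 1) and
    closes when the cut width returns to 1 at a node with fan-out <= 1.
    """
    indeg = defaultdict(int)
    for u, succs in graph.items():
        for v in succs:
            indeg[v] += 1
    blocks, current, width = [], None, 0
    for u in topological_order(graph):
        fan_out = len(graph.get(u, []))
        if current is None and fan_out > 1:
            current = [u]          # fork: macro-block starts here
        elif current is not None:
            current.append(u)
        width += fan_out - indeg[u]  # edges crossing the cut after u
        if current is not None and fan_out <= 1 and width == 1:
            blocks.append(current)   # join reached: block is complete
            current = None
    return blocks

# A residual-style fragment: conv1 forks into two paths that join at add1.
g = {"in": ["conv1"], "conv1": ["conv2", "skip"],
     "conv2": ["conv3"], "conv3": ["add1"],
     "skip": ["add1"], "add1": ["fc"], "fc": []}
blocks = find_macro_blocks(g)
```

On this fragment the single detected macro-block runs from the fork node `conv1` (fan-in 1) to the join node `add1` (fan-out 1), matching the start/end conditions stated in the abstract.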
With the GUI-based modeling tool, developers can easily design deep learning models using the building blocks and can conveniently visualize deep learning models with many layers. Improvements/Applications: The proposed building block identification method helps developers build deep learning models by providing useful building blocks identified from existing deep learning models, and enables the visualization of deep learning models with many layers by automatically organizing hierarchies on such models.
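The frequent consecutive subsequence mining step could be sketched as a contiguous n-gram count over layer-type sequences drawn from multiple models. The sequence encoding, the support threshold, and the length cap are assumptions for illustration; the paper's mining technique may differ.

```python
from collections import Counter

def frequent_consecutive_subsequences(sequences, min_support=2, max_len=4):
    """Return contiguous layer-type patterns (length 2..max_len) that
    occur in at least `min_support` of the given model sequences."""
    counts = Counter()
    for seq in sequences:
        seen = set()  # count each pattern at most once per model
        for n in range(2, max_len + 1):
            for i in range(len(seq) - n + 1):
                seen.add(tuple(seq[i:i + n]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

# Two hypothetical models encoded as layer-type sequences.
models = [
    ["conv", "bn", "relu", "pool", "conv", "bn", "relu"],
    ["conv", "bn", "relu", "fc"],
]
patterns = frequent_consecutive_subsequences(models)
```

Here the run `("conv", "bn", "relu")` occurs in both models and so would be reported as a candidate building block, which is the role the abstract assigns to frequent consecutive subsequences.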

Keywords: Deep Learning Modeling, Code Reuse, Model Visualization, Frequent Pattern Mining.
Scope of the Article: Deep Learning