CRF organised a hackathon at Campus Melfi (Italy) on 18–19 June 2019 to share knowledge on automated data processing for manufacturing and to start defining a business case for big data analysis across different processes and plants. The aim of the hackathon was, on one side, to develop methodologies and tools for the predictive analysis of massive manufacturing data, covering heterogeneous real-time acquisition, massive industrial data management and intelligent decision-making, and, on the other side, to define the metrics, objectives and AS-IS and TO-BE situations for the implementation of big data analysis in automotive industrial processes.

In the CRF hackathon, students, PhD candidates, researchers and employees from SMEs were asked to develop an innovative solution for the analysis, visualisation and architecture of complex data collected from a die casting process. The dataset had previously been processed by CRF, which structured and anonymised the large volume of casting data.
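The anonymisation step mentioned above can be sketched as follows. This is a minimal illustration, not CRF's actual procedure: the column names and the salted-hash pseudonymisation approach are assumptions, and the records are invented.

```python
import hashlib

import pandas as pd

# Hypothetical raw casting records; the schema is illustrative,
# not the actual CRF dataset.
raw = pd.DataFrame({
    "machine_id": ["M-01", "M-02", "M-01"],
    "operator": ["Rossi", "Bianchi", "Rossi"],
    "injection_pressure": [712.5, 698.0, 705.3],
    "result": [1, 0, 1],
})

def pseudonymise(value: str, salt: str = "example-salt") -> str:
    """Replace an identifier with a truncated salted SHA-256 digest.

    The same input always maps to the same digest, so joins and
    group-by analyses on the pseudonymised column still work.
    """
    return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

anonymised = raw.copy()
for col in ("machine_id", "operator"):
    anonymised[col] = anonymised[col].map(pseudonymise)
```

Numeric process parameters are left untouched so they remain usable for analysis; only the identifying columns are replaced.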

CRF experts explained to the participants the relevant information concerning the die casting process from which the provided dataset was extracted, illustrating the different parameters for the analysis. Six teams were formed after interviewing the participants about their skills in analysis, visualisation and architecture/hardware, so as to create heterogeneous groups able to perform all the required activities.

To motivate the teams and improve communication and collaboration, team-building games were organised, during which the participants took part in role-playing exercises. Once a collaborative spirit had been established, the groups were accommodated in different work areas. At the same time, the I-BiDaaS project partners were able to analyse the case study, albeit outside the competition.
The teams worked with different software (Minitab, Excel, R) and methods: neural networks (Multi-Layer Perceptron, MLP), decision trees, Random Forest and Generalized Additive Models (GAM) for Big Data.
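A comparison of the tree-based and neural methods listed above can be sketched in a few lines. This is an illustrative sketch on synthetic data, not the teams' actual code: the feature dimensions, labels and hyperparameters are assumptions, and scikit-learn stands in for the R/Minitab tooling the teams used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the casting dataset: 20 process parameters
# and a binary good/scrap outcome. The real CRF data is not used here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One classifier per method family mentioned in the text
# (GAMs are omitted, having no direct scikit-learn equivalent).
models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                         random_state=0),
}

# Fit each model and record its held-out classification accuracy.
scores = {name: accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
          for name, model in models.items()}
```

Comparing several model families on one held-out split, as here, mirrors how the teams' results could be benchmarked against each other on the shared dataset.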

Each group obtained valuable results, in some cases using techniques different from those applied in CRF's earlier analysis.

In particular, this is the summary from a technical point of view:

• Benchmark: on the same dataset, the I-BiDaaS team achieved 80% classification accuracy, while the best result reported by the other teams was 73-74%.
• Validation: the other teams identified similar features to the I-BiDaaS team, although their modelling approaches differed. This suggests that the project approach could be cross-validated by others.
• Casting process phases: the other teams took into account the three phases of the casting process, which the project team had not.
• Data cleaning: the other teams pointed to a problem in the Result_2 data, namely the presence of class label 4, which the project team had not taken into account.
• Breaking silos: I-BiDaaS gained access to real anonymised data on the engine casting process; the data is at least an order of magnitude larger than the data previously available to the consortium.
• Data fabrication: new approaches for fabricating data *with labels* were brainstormed and delivered during the hackathon.
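The data-cleaning issue in the list above, unexpected class labels in the Result_2 column, can be handled with a simple validity filter. This is a minimal sketch on invented rows: the valid label set {0, 1, 2, 3} and the column names are assumptions for illustration, with label 4 standing in as the anomaly the other teams flagged.

```python
import pandas as pd

# Hypothetical excerpt of the outcome data; values are invented.
results = pd.DataFrame({
    "part_id": [101, 102, 103, 104, 105],
    "Result_2": [0, 1, 4, 2, 3],
})

# Assumed set of legitimate class labels; 4 falls outside it.
VALID_LABELS = {0, 1, 2, 3}

# Separate the anomalous rows for inspection instead of silently
# dropping them, so the cleaning step stays auditable.
mask = results["Result_2"].isin(VALID_LABELS)
invalid = results[~mask]
clean = results[mask].reset_index(drop=True)
```

Keeping the rejected rows in a separate frame makes it easy to report how many records were affected and to trace them back to the source.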

In summary, as a result of the hackathon we produced a new random forest model and a new neural network classifier for the CRF foundry use case. As mentioned above, our random forest model achieved 80% accuracy versus the 74% achieved by the best of the other hackathon teams.

CRF is exploiting the results obtained during the hackathon to address the opportunities and challenges of industrial big data analytics on data produced by diverse sources in manufacturing environments, such as sensors, devices and humans. The project and hackathon results will be evaluated on their effective capability to increase production efficiency and to reduce energy consumption and production costs.