To validate one of its main results, the DataBench Toolbox, the project has launched two validation campaigns aimed at gathering feedback and recommendations from people external to the project who are interested in using the Toolbox and the different features it offers.

The DataBench Toolbox is a one-stop shop for benchmarking Big Data and AI technologies. It offers different types of users multiple ways to evaluate and compare technical results, while also providing relevant information on how these results can impact and help derive business KPIs.

The validation campaigns are:

1. Generation of architectural Pipelines-Blueprints

Starts on: 22/11/2020
Ends on: 13/12/2020
Estimated Test Duration: 30 minutes plus mapping to blueprints that requires desk analysis
Target beta testers profile: Developers
Beta tester level: Advanced

This campaign aims at gathering content in the form of new architectural Big Data/AI blueprints mapped to the BDV reference model and the DataBench pipeline/blueprint. Testers should study the available DataBench information and guidelines, then follow the provided steps to prepare their own mappings, along with the resulting diagrams and any explanations. The Toolbox provides a web form to upload all relevant materials, which will then be assessed by a DataBench editorial board before final publication in the Toolbox.

Requirements for this campaign:
– Having a big data/AI architecture in place in your project/organization
– Willingness to provide mappings from your architecture to be part of the DataBench pipeline/blueprints
– Basic knowledge of web browsing
– Internet connection
– Preferably use Google Chrome

All the instructions for this validation campaign are available here. After following the instructions, please fill in the feedback questionnaire.

2. Finding the right benchmarks for technical and business users

Starts on: 22/11/2020
Ends on: 08/12/2020
Estimated Test Duration: 30 to 40 minutes
Target beta testers profile: Business users, Developers
Beta tester level: Intermediate

This campaign aims at gathering feedback on the usage of the Toolbox and the user interface of its web front-end. The idea is to use the user journeys drafted in the Toolbox to drive the search process and to understand whether users find this information sufficient to kick-start finding the right benchmark and the knowledge they were looking for.

Requirements for this campaign:

– Previous knowledge of Big Data or AI
– Basic knowledge of web browsing
– Internet connection
– Preferably use Google Chrome

All the instructions for this validation campaign are available here. After following the instructions, please fill in the feedback questionnaire.

For any other inquiries, please contact us by email.

The two validation campaigns offer participants different rewards and incentives in recognition of their effort and useful feedback, such as having their name listed as a contributor on the DataBench website, publication of their blueprint, and acknowledgement of their contributions. In addition, the first 16 beta testers will be added to the ReachOut Hall of Fame, awarded a monetary prize, and entered into the ReachOut Lottery.

These campaigns have been launched through the ReachOut project, a support action funded by the EC that provides EU-funded projects in the area of software technologies with end-to-end support for developing and launching testing campaigns, enabling them to concretely engage with their potential users and markets and to develop their ecosystems.