Around 200 billion tweets are sent per year. Google receives over 200 million search requests per day. The UK's Department of Health plans to sequence 100,000 genomes, each of which generates around 200 GB of data. Walmart's database contains over 2.5 petabytes of data from the retailer's 1 million customer transactions per hour. Who will analyse all this data?

Being able to analyse large amounts of electronic data quickly and efficiently is becoming increasingly important for a wide range of organisations. Huge amounts of data are already available, and the volume being produced is growing rapidly. Data is generated from a wide range of sources, including medicine, social media, scientific experiments and sensor networks. It exists in a variety of formats, ranging from structured (e.g. spreadsheets and sensor data) to unstructured (e.g. text, images, video and speech). Deriving information from this data has become one of the key challenges within Computer Science. Topics covered include:
Machine Learning and Adaptive Intelligence (in Python);
Statistical Data Science (in R);
Natural Language Processing;
Scalable Machine Learning;
Information Governance and Ethics;
Computer Security and Forensics;
Parallel Computing with Graphics Processing Units (GPUs).