Demand for data acquisition (DAQ) solutions continues to cut across segments and markets, and suppliers are under pressure to capture larger margins and market share. Vendors are now pushing into niche markets through targeted marketing initiatives. New applications for level, flow, and pressure sensors, collecting data in medical infrastructure and in the power and energy sector, are currently fueling the industry's growth.
Future opportunities in data acquisition lie in wireless DAQ systems, with an emphasis on modular systems and paperless chart recorders. Demand for data loggers and smart sensors is also rising, the market is moving toward VXI, PXI, and PCI interfaces, and smart grid sensor technology continues to be touted as the future of DAQ. Below are four data acquisition trends to watch in 2017:
- Apache Spark trumps MapReduce
As far as in-memory data processing goes, Apache Spark became a top-level Apache project in February 2014 and was the dominant buzzword for much of 2015 and 2016, seeing significant early adoption. Expect its adoption to explode among fast-moving organizations and fast followers seeking to replace outdated data management platforms. Spark on Hadoop YARN is likely to dominate the conversation and will, to a large extent, do away with the need for MapReduce processing.
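To see why Spark displaces MapReduce for this kind of work, consider the classic word-count job: Hadoop MapReduce runs it as separate map, shuffle, and reduce stages with intermediate results written to disk, while Spark chains the same steps as in-memory RDD operations. The sketch below simulates the three stages in plain Python (no Hadoop or Spark installation assumed; the stage function names are illustrative only).

```python
from collections import defaultdict

# Classic MapReduce-style word count, simulated in plain Python.
# In Spark the whole pipeline is roughly:
#   sc.textFile(path).flatMap(str.split) \
#     .map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
# with intermediate results kept in memory rather than written
# to disk between stages, as Hadoop MapReduce would do.

def map_stage(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_stage(pairs):
    """Shuffle: group the emitted counts by key (word)."""
    groups = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)
    return groups

def reduce_stage(groups):
    """Reduce: sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark replaces mapreduce", "spark runs in memory"]
counts = reduce_stage(shuffle_stage(map_stage(lines)))
print(counts)  # {'spark': 2, 'replaces': 1, 'mapreduce': 1, ...}
```

Each stage's output feeds the next; the disk round-trips between those hand-offs are exactly the overhead Spark's in-memory execution removes.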
- IoT Matures
Back in 1999, the phrase ‘internet of things’ (IoT) was coined by Kevin Ashton and the world has continued to see interesting advances in the use of interconnected devices and sensors. The IoT phenomenon has rapidly gathered steam in recent years with companies such as Ericsson, Cisco Systems, and GE making immense contributions.
Expect to see open standards embraced in 2017 to improve data acquisition and analysis, device monitoring, and information sharing. We should also see the issues surrounding the data these devices collect diverge: consumer-driven personal data is guaranteed to raise privacy and security complexities, while enterprise-driven data should raise complexities around usage patterns, storage architectures, and knowledge sharing.
- ‘Unstructured’ content analysis to become routine
The analysis of emojis, spam, images, video, audio, free text, and other forms of non-tabular data has been a specialty area within data science for some years now. The explosion of free content and libraries such as word2vec and doc2vec (in DL4J), together with increasingly accessible semantic analysis techniques, has pushed text mining into the mainstream. For instance, Carnegie Mellon researchers recently open-sourced their OpenFace project, which they claim recognizes faces in real time from only 10 reference photos. This is just one example of the maturing libraries, tools, and techniques that enable non-tabular data analysis, and we should expect their widespread use in 2017, inevitably accompanied by the attendant privacy and security debates.
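Techniques such as word2vec make semantic comparison a matter of vector arithmetic: words (or, with doc2vec, whole documents) become dense vectors, and relatedness is measured as the cosine of the angle between them. The sketch below shows the cosine similarity computation on hand-made three-dimensional vectors; these toy embeddings are invented purely for illustration, whereas a trained word2vec model (e.g. in DL4J or gensim) would learn vectors with hundreds of dimensions from a corpus.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made "embeddings" for illustration only; a trained
# word2vec model would supply the real vectors.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

royal = cosine_similarity(vectors["king"], vectors["queen"])  # high
fruit = cosine_similarity(vectors["king"], vectors["apple"])  # low
print(royal > fruit)
```

The same scoring function works unchanged on learned embeddings of any dimension, which is what makes these techniques so easy to fold into mainstream pipelines.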
- Data encryption, privacy, and security
The ongoing fight against cybercrime should continue to escalate in 2017 as hacktivists and cybercriminals grow more sophisticated. Corporations are increasingly concerned about unauthorized access to sensitive data, recovery costs, and the reputational damage that follows. Likewise, consumers are growing increasingly aware of the value of their personal data and the risk to its privacy. Meanwhile, technology users are more hyper-connected than ever before, which increases the vulnerability of data. These and other factors mean that advanced data strategies should remain a high priority for IT corporations across the globe in 2017, according to Chris Lange, a prominent data logger.
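One basic building block of such data strategies is verifying that acquired readings have not been tampered with between the logger and the server. As a minimal sketch, the code below signs a sensor reading with an HMAC-SHA256 tag using Python's standard-library `hmac` module; the key, sensor name, and payload format are all invented for illustration, and a real deployment would provision the key securely rather than hard-coding it.

```python
import hmac
import hashlib

# Shared secret between the data logger and the collection server.
# Illustrative only: never hard-code keys in production.
SECRET_KEY = b"example-key-for-illustration-only"

def sign(payload: bytes) -> str:
    """Return an HMAC-SHA256 tag authenticating the payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the payload."""
    return hmac.compare_digest(sign(payload), tag)

reading = b'{"sensor": "pressure-01", "value": 101.3}'
tag = sign(reading)

print(verify(reading, tag))   # untampered reading passes
tampered = b'{"sensor": "pressure-01", "value": 999.9}'
print(verify(tampered, tag))  # altered reading fails
```

HMAC provides integrity and authenticity, not confidentiality; sensitive payloads would additionally be encrypted in transit, e.g. over TLS.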