The Internet of Things (IoT) is increasingly part of our everyday lives, with so-called “smart” devices. But for all their undoubted technical merits, they also represent a growing threat to privacy. There are several aspects to the problem. One is that devices may be directly monitoring what people say and do. Another is the leakage of sensitive information from the data streams of IoT devices. Finally, there is the problem summed up by what some call “Hyppönen’s law”: “Whenever an appliance is described as being ‘smart’, it’s vulnerable”.
The open science revolution can be said to have begun with open access—the idea that academic papers should be freely available as digital documents. Open access takes the original idea of scholarly publishing to the next level by making that information freely accessible to all: the internet can potentially give everyone with a connection cost-free access to every article posted online. The same can be said of another important aspect of open science: open data. Before the internet, handling data was a tedious and time-consuming process. But once digitized, even the most capacious databases can be transmitted, combined, compared and analyzed very rapidly.
People working in science can potentially benefit from every piece of free software code—the operating systems and apps, and the tools and libraries—so the better those become, the more useful they are for scientists. But there's one open-source project in particular that has already had a significant impact on how scientists work—Project Jupyter. Project Jupyter is a set of open-source software projects that form the building blocks for interactive and exploratory computing that is reproducible and multi-language.
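Part of what makes Jupyter notebooks reproducible and multi-language is that the notebook itself is an open, documented file format: plain JSON that interleaves narrative, code and outputs, with a kernel specification that determines which language executes the code. A minimal sketch of that format, written with only the standard library (the file name and cell contents here are illustrative assumptions, not part of any real project):

```python
import json

# A Jupyter notebook on disk is JSON following the nbformat spec
# (version 4): a list of cells plus document-level metadata.
notebook = {
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            # Narrative text lives beside the code it explains.
            "source": ["# Analysis notes\n"],
        },
        {
            "cell_type": "code",
            "execution_count": None,   # filled in when the cell is run
            "metadata": {},
            "outputs": [],             # captured results are stored here
            "source": ["print(2 + 2)"],
        },
    ],
    "metadata": {
        # The kernelspec is what makes notebooks multi-language:
        # swapping in an R or Julia kernel changes the execution
        # language without changing the document format.
        "kernelspec": {"name": "python3", "display_name": "Python 3"}
    },
    "nbformat": 4,
    "nbformat_minor": 5,
}

# Write it out as a .ipynb file that Jupyter tools can open.
with open("example.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

Because code, prose and recorded outputs travel together in one openly specified file, another scientist can re-run the same document and compare results—the core of Jupyter's contribution to reproducible research.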
Artificial intelligence is widely viewed as likely to usher in the next big step-change in computing, but a recent development in the field has particular implications for open source. It concerns the rise of "ethical" AI. It's long been accepted that the creators of open-source projects cannot stop their code from being used for purposes with which they may not agree, or which they may even strongly condemn—that's why it's called free software. What exactly does the rise of "ethical" AI imply for the open-source world, and how should the community respond?