The Internet of Things and how it owns or sells our digital souls

Technology is a big part of our lives. Since the beginning of the internet era we do a lot of things online: we can date, buy, play, work, communicate, coordinate, explore, represent who we are and more. Everything we do online leaves a data trace, and unlike us this trace is immortal. At this point in history we have to accept that technology makes our lives easier and at the same time stores information about us. So after death, if one has been digitally active, a version of one’s soul will remain as data. And this data is capital, valuable and lucrative, because it helps companies identify which profile we fit and thus what to sell to profiles similar to ours. This has led to a world where traditional advertising is obsolete, and research and data collection are the next best thing. Within this context the Internet of Things arises.

Computers and mobile devices are not the only objects that provide access to the online world; now a watch, an entire home, or a car will do so too. This means four things: 1) intelligent objects ensure that even more data will be produced, even when our laptops or mobile devices are off; 2) companies producing intelligent objects will own more data from users; 3) more data will be traded among organisations for different purposes — profit, development, research, among other possible options; and lastly 4) new generations born in this time will be tracked from day one, and their data will be usable during their lives and after their deaths.

There is a paradox at this point in history: it seems the internet has given us the power to reach everything at any time. Childhood friends will never disappear from our lives thanks to Facebook. Friends living on opposite sides of the world can chat daily through Skype or WhatsApp for free. Dead relatives can keep receiving comments and posts from us in their afterlife. It is a serious issue that most people do not stop and think that in exchange for free services we agree to give away every photo, conversation and piece of personal information shared on these platforms, even the information shared in a private message — which by nature is anything but private. Our daily conversations are owned by social media companies. And the regulations around them are still neither clear nor good enough.

We live in a spiral where we trade privacy for convenience. The possibility of changing some privacy settings may make it seem like we are in control. We feel good when Facebook reminds us to review our privacy settings, especially with regard to what other users can see, or how we are perceived. But Facebook certainly does not remind us of the amount of data it has collected from us to date. What would it be like to receive a message from Facebook saying something like: “Today we own 5,000 of your private conversations since you opened your account with us, and we will keep them forever”?

Privacy may seem like a complex issue, but people still close the door when they go to the toilet, and keep certain aspects of their lives hidden from the public — no? Privacy is a civil right, which in the digital age is constantly challenged and forgotten by companies, governments and individuals. In this respect, smart homes and wearable gadgets are both a commodity and a threat to privacy. There is a great amount of interference if we turn to Article 12 of the Universal Declaration of Human Rights:

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

To guard against interference and unethical actions, the law must be updated to address these present concerns. Privacy is a civil right that must be preserved, especially in the digital world of the Internet of Things, where these objects are becoming inevitably pervasive: they can be everywhere. There is no doubt that intelligent objects will be incorporated gradually into cities, institutions and homes. People already wear intelligent watches, and we know of hotels on the Mediterranean coast that are starting to implement intelligent light systems that can be controlled — and hacked. These objects serve to monitor what surrounds us, and at the same time companies, through these intelligent gadgets, can monitor us.

Consequently, industry has to produce intelligent objects that respect civil rights and embrace Ethical Design oriented processes, where privacy challenges are considered specifically in the design, production and distribution phases of these products. Moreover, users have to be informed about the repercussions of monitoring and producing data through intelligent objects. People need to know that their privacy will be challenged after buying intelligent objects; the packaging and manual should express clearly — and in a properly sized font — how the data produced will be managed. If you buy an intelligent watch, you need to know whether the company building that watch will monitor and sell your personal data from that day on. It is a civil right for users to know whether these objects respect their privacy rights. Unfortunately, as with smoking, just because the package tells you it can kill you does not mean you will stop buying it; that is why this is also a concern for individuals.

In recent years privacy issues have become less of a priority. Most users with a computer may not care that they could be recorded 24 hours a day by the tiny, seemingly insignificant web camera on top of their screens. It is very naive to think that gadgets like these are not dangerous. And it is not only about privacy — being naked or not in front of your computer — it is also a matter of money. Imagine that a company had access to all the data generated by your web camera. Imagine that it could sell this data to another company researching eye tracking. This situation is hypothetical, but unfortunately not far from what we may encounter in the coming years with the rising trend of intelligent objects.

A group of researchers from the Università Cattolica in Milan, in collaboration with the European Commission, is taking up this challenge and defining the different aspects users must be aware of in order to be free to switch off control by big corporations. This work is especially valuable to those producing and designing the objects of tomorrow. To that end they introduce the concept of Ethical Design, which aims to educate and engage everybody in this matter so that the challenges of surveillance and of profiting from users’ data are kept in check.

In their latest publication they not only introduced the term but also presented their recent research into tools that users will be able to employ to resist the ongoing trade in users’ private data. The tool, SecKit, helps users interact with intelligent objects. Nevertheless, a tool is not enough: strong laws should be created and demanded. And users must start to care about and engage with this matter before their data is sold without their consent.

For more information regarding the project, images or diagrams explaining the tool, or details of the research findings, please contact the author by email at the IT University.

Sources: Baldini et al., “Ethical Design in the Internet of Things”, European Commission, GNKS Consult



This text is a critique of the paper Ethical Design in the Internet of Things by Gianmarco Baldini, Maarten Botterman, Ricardo Neisse and Mariachiara Tallacchini, published in 2016. The paper, open to everyone, covers privacy issues related to the IoT (Internet of Things). First the authors explain the concept of the IoT and its perceived challenges, then they introduce the idea of Ethical Design, and lastly they present SecKit as a tool for users and for regulatory purposes.


Opaque reflections on the process

At the beginning the authors provide two definitions of the IoT (p. 2), but why do they pick these ones? Some guidance through their process could have helped the reader understand why the authors write about Ethical Design in the first place and why they picked these specific definitions. An example where the intentions of the writers remain unclear is when they mention ‘the pervasiveness of IoT’. If the paper addresses everyone, why would they use this word instead of ‘widespread’? Is there a negative connotation to the word?

Education for users/citizens

The text explains very well the different aspects around the IoT: new regulations (p. 3), the idea of context (p. 5). However, the authors do not focus on the educational processes that could help users/citizens today, or on how these relate to the IoT. This idea appears scattered across different sections of the paper — for example, when the authors carefully illustrate, on page 7, aspects related to the nature of users and their capacities, and how more qualified users are better able to control their privacy than others. Interesting concepts thus arise: incomplete information on the ‘consequence of data disclosure’, ‘psychological biases’, ‘accountability’, ‘on-line and off-line identity’ and the ‘digital divide’. All of these could form a section focused on the user and education.

Reader considerations

As a reader, the first part of the paper is easy to read; the contextualisation brings strong arguments for why one should read it. Reflecting on regulations around privacy and referencing surveillance mark the relevance of the subject. Nevertheless, the lack of reflection on, or explanation of, the authors’ wishes and goals for this paper creates a certain distrust in the tool proposed at its end. It may seem that the paper was written to justify the tool, rather than the tool being one of several possible approaches to the problem.