
NITRD’s 30th Anniversary Symposium Recap – Panel 4: Privacy and the Internet of Things (IoT)

June 23rd, 2022 / in Announcements, CCC, NITRD, videos / by Catherine Gill

Last month the Networking and Information Technology Research and Development (NITRD) program commemorated its 30th anniversary in Washington D.C. You can read the full event recap here. To highlight the impact federal investments have had on the computing research community, the event featured five panels in which participants discussed key achievements in the field over the past decade and future directions. Each panel focused on an important subarea of computing research: Computing at Scale, Networking and Security, Artificial Intelligence/Machine Learning, Privacy and the Internet of Things, and Socially Responsible Computing.


Privacy has become a huge topic of conversation not only within the computing research community, but across all disciplines in both academia and industry. Adverse privacy effects stemming from the availability of large-scale datasets are multiplied by the interconnected sensors, devices, and actuators that make up the Internet of Things (IoT). Moderated by Charles (“Chuck”) Romine (NIST) and featuring field experts Ed Felten (Princeton), Marc Groman (Groman Consulting), Katerina Megas (NIST), and Sunoo Park (Cornell), Panel 4: Privacy and IoT covered topics such as the tradeoffs between data use and privacy, and potential research goals that could help achieve effective policy solutions.


Romine started out by highlighting a common thread across all the panels: “speaking about both the benefits and the extraordinary capabilities delivered through federal funding investments, along with the associated risks.” IoT is no different: it gives people access to vast amounts of information, enables successful ad campaigns, and tailors technology to personal taste, but it also jeopardizes user privacy.


As Megas pointed out, “The whole reason we’re undertaking this effort is because we want to be able to actually see IoT recognized and for society to reap the benefits.” She went on to share the potential benefits of being able to share data across IoT. The “phenomenal” scale of devices in IoT can be used to identify problems across datasets, learn things with high-impact potential for individuals and society, train Artificial Intelligence technologies, and enable small, innovative companies to test their devices. Romine then asked the panelists what the associated privacy risks are in this context of IoT and information sharing.


Groman answered by first explaining the interplay between privacy and IoT. The privacy side of IoT concerns the subset of collected data that is about or relates to people. Do people know that data is being collected about them? Is there an interface where they can interact with the device, learn what it is collecting, or change that? Do people understand what information is being collected, or what inferences the device or company is drawing from that data? Given the monetary incentive structure and the “vast” amount of money that companies stand to make by capitalizing on such data, Groman urged people to turn to policy for a solution.


“The goal here is to maximize benefits and minimize harm. We do not have a policy, legal or regulatory framework in this country that produces incentives to get there” – Marc Groman


In response to Groman’s policy stance, Romine asked the panel about the potential for a technological solution.


Felten suggested we start by seeking to better understand and apply statistical information control, and by building tools that allow people to interact with their data and mitigate negative impacts. Park, who has a particular interest in cryptographic privacy tools, named a number of ways cryptography could help in this regard.


“Cryptography provides a toolkit to build systems that have configurations of information flows and include more fine-grained control over access.” – Sunoo Park


One such tool is zero-knowledge proofs, which allow someone to share part of their data while keeping the rest secret. She gave the example of a bouncer checking IDs at a bar: with a zero-knowledge proof, you could prove that you are 21 without revealing the address or birthdate also listed on the ID.
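
To make the mechanics concrete, here is a minimal sketch of one classic zero-knowledge proof, Schnorr’s protocol, in which a prover demonstrates knowledge of a secret exponent x satisfying y = g^x mod p without ever revealing x. This is an illustrative toy, not the ID-checking scheme Park described, and the parameters below are far too small for real use; it simply shows the core idea of proving you know a secret while keeping it hidden.

```python
import secrets

# Toy public parameters, for illustration only: p = 2q + 1 with p and q
# prime, and g generating the order-q subgroup of squares mod p. Real
# deployments use ~256-bit q or elliptic-curve groups.
p, q, g = 2039, 1019, 4

def schnorr_round(x: int) -> bool:
    """One round of Schnorr's zero-knowledge proof: the prover convinces
    the verifier that she knows x with y = g^x mod p; x is never sent."""
    y = pow(g, x, p)          # public value; the exponent x stays secret

    r = secrets.randbelow(q)  # prover: fresh random nonce
    t = pow(g, r, p)          # prover -> verifier: commitment

    c = secrets.randbelow(q)  # verifier -> prover: random challenge

    s = (r + c * x) % q       # prover -> verifier: response; the random
                              # nonce r masks x, so s alone reveals nothing

    # Verifier accepts iff g^s == t * y^c (mod p), which holds exactly
    # when the response is consistent with knowing x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

if __name__ == "__main__":
    secret = secrets.randbelow(q)
    print(schnorr_round(secret))  # True: the proof verifies
```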


Park cautioned that while cryptography provides “a larger solution space that we can use to build privacy,” it does not answer the question of what we should build with these tools, or what forms of information we consider appropriate or desirable to share. That is something we have to work out as a society and as a matter of policy.


Lastly, the panelists were asked why people should care about privacy if they have nothing to hide. Earning a laugh from the crowd, Felten joked that everyone has something to hide. On a more serious note, he went on to highlight the potential harm in data profiling.


“People out there are building a comprehensive model of who you are and what you’re likely to do.” – Ed Felten


That is a terrifying thought on its own, and the assumptions these models make can be wrong, sometimes limiting opportunities and “freedom of action” in the future. Groman pointed out another common thread throughout the panels’ discussions: the importance of realizing that some communities are disproportionately impacted. The stakes of keeping certain data private can be far higher for some, whether because of sexual orientation, gender, or race, or for abused women and children.


During the Q&A, Ben Zorn, a speaker from Panel 3, circled back to the benefits of using data to train AI. He asked what could be done about private information leaking from the datasets used to train AI.


Felten pointed out that unless you use a rigorous method to intentionally stop the flow of information, the information is going to flow. That is why it is so important to focus on building rigorous, provable methods, such as privacy-preserving machine learning, and interfaces that let people control how their information propagates.
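
Felten did not name a specific technique here, but differential privacy is the standard example of the kind of rigorous, provable guarantee he alludes to, and it underpins much of privacy-preserving machine learning. Below is a minimal sketch of its simplest building block, the Laplace mechanism applied to a counting query; the query and numbers are hypothetical, chosen only to illustrate the idea.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy using the
    Laplace mechanism. A counting query changes by at most 1 when any
    one person's data is added or removed, so noise of scale 1/epsilon
    provably masks each individual's contribution."""
    scale = 1.0 / epsilon
    # Laplace(0, scale) noise, sampled as the difference of two
    # i.i.d. exponential variates with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical example: publish roughly how many of a device fleet's
# users enabled some feature, without exposing any single user.
print(dp_count(4217, epsilon=0.5))
```

Smaller values of epsilon give stronger privacy at the cost of noisier answers, and the same style of guarantee extends to model training (e.g., DP-SGD adds calibrated noise to gradient updates), which is one way datasets can be used to train AI without leaking individual records.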


Megas summed it up well: in the end, we can’t train everybody, but we can give people a framework for thinking about risk and tools that give them greater control over their data. You can watch the full recording on the CCC web page or on NITRD’s YouTube channel.


Be on the lookout for the final blog of the series, Panel 5: How Technology Can Benefit Society: Broadening Perspectives in Fundamental Research.
