The internet is not safe
This year’s Safer Internet Day will be held on February 11. With it, interest groups hope to “promote the safe and positive use of digital technology, especially among children and young people.”
I personally love the initiative, as it aims to foster digital literacy among both young and old internet users, and urges politicians to hold tech companies to account if they don’t help make the internet a healthier place for people to explore.
That said, it’s worth acknowledging that the internet isn’t actually a safe place right now.
To prepare for February 11, perhaps we should discuss some of the ongoing issues that necessitate an actual Safer Internet Day for both kids and adults.
There is no “completely secure” Internet
As “surfers on the worldwide web,” we might like to think we’re invincible online. The more tech-savvy you seem, the more this feels like truth.
Of course, this is a lie. No one is ever completely secure online, and anyone who thinks they’re invincible is likely deluded. Even if you don’t think you’ve been hacked, the more services and sites you use or visit, the likelier it is that some of your personal information has been, or will be, compromised in a data breach.
This is likely why Troy Hunt’s Have I Been Pwned – a service you can sign up for to alert you if your email address or personal information turns up in a data breach – is a popular online destination.
In fact, a number of governments, including the United Kingdom, Switzerland, Ireland, Norway, and Australia, have partnered with or used Have I Been Pwned to monitor government domains for exposure in data breaches.
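A related part of the same service, Pwned Passwords, is a nice illustration of how breach checks can be done without handing over the secret being checked: the client hashes the password locally and sends only the first five characters of the hash. A minimal sketch of the client-side half (the URL and response handling are simplified):

```python
# Sketch of the k-anonymity scheme used by Have I Been Pwned's
# Pwned Passwords "range" API: the password is SHA-1 hashed locally,
# and only the first 5 hex characters of the hash are ever sent.
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-char prefix sent to
    the API and the 35-char suffix matched locally in the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
# The client would then fetch https://api.pwnedpasswords.com/range/<prefix>
# and check whether <suffix> appears in the returned list of hash suffixes.
print(prefix)
```

Because the server only ever sees a five-character prefix shared by hundreds of different hashes, it can’t tell which password was actually being checked.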
The lie of anonymity online
Adding to the potential risk of being online is the false promise of anonymity at a time when companies sell or trade our personal “anonymized” data.
If you can gather “anonymized” data from a bunch of breaches, it stands to reason you can de-anonymize it once you have a large enough trove of information to sift through.
That’s what Dasha Metropolitansky and Kian Attari, students at the Harvard John A. Paulson School of Engineering and Applied Sciences, did.
According to a press release for a class paper they’re working on, the students built a tool that sifts through consumer datasets exposed in data breaches and identifies the people in them.
Said Attari, “The program takes in a list of personally identifiable information, such as a list of emails or usernames, and searches across the leaks for all the credential data it can find for each person.”
Speaking with Vice, the pair said their tool analyzed thousands of datasets from breaches and other data scandals. Even with “anonymized” data, cross-referencing the leaks was enough to pick out real people in the vast sea of leaked information they’d collected.
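The cross-referencing idea Attari describes can be sketched with toy data (the names, emails, and “breach” records below are entirely invented): two leaks that look harmless on their own become identifying once joined on a shared field such as an email address.

```python
# Toy illustration (all data invented): two separate "breach" dumps
# that are individually pseudonymous, but join-able on the email field.
breach_forum = [  # a forum leak: pseudonym + email
    {"username": "night_owl", "email": "alex@example.com"},
    {"username": "gr33nfox", "email": "sam@example.com"},
]
breach_shop = [  # a shop leak: email + real name and city
    {"email": "alex@example.com", "name": "Alex Reyes", "city": "Manila"},
]

def deanonymize(pseudonymous, identified, key="email"):
    """Link records across leaks by a shared identifier."""
    lookup = {row[key]: row for row in identified}
    return [
        {**row, **lookup[row[key]]}
        for row in pseudonymous
        if row[key] in lookup
    ]

linked = deanonymize(breach_forum, breach_shop)
print(linked[0]["username"], "->", linked[0]["name"])
```

The pseudonym “night_owl” now maps to a real name and city; with thousands of datasets instead of two, the same join multiplies across every shared field.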
This issue is made worse by the lack of data privacy laws in certain countries, and by weak enforcement of the laws that do exist, which would otherwise push companies to step up their security, monitoring, and breach reporting.
The algorithms might be biased
Perhaps one of the more abstract things to wrap one’s head around is the idea that computer algorithms may not actually be unbiased calculations.
Algorithmic bias is what occurs when errors in code compound themselves and create unfairness or discrimination within the algorithm, such as favoring one subset of users over another. The bias can come about from design limitations or, as in some cases, unintended decisions about what data is used to code or train an algorithm.
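That last point – bias baked in through training data – can be shown with a toy example (all figures invented): a naive “model” that simply learns the most common historical outcome per group will inherit the skew of lopsided records, even though no rule in the code ever mentions the group.

```python
# Toy illustration (all data invented): historical decisions
# under-represent group "B", so a frequency-based "model" trained on
# them inherits the skew without any explicitly biased rule.
history = [
    ("A", "approved"), ("A", "approved"), ("A", "approved"),
    ("A", "denied"),
    ("B", "denied"), ("B", "denied"),  # only 2 samples, both denials
]

def train_majority_rule(rows):
    """Learn, per group, the most common historical outcome."""
    counts = {}
    for group, outcome in rows:
        counts.setdefault(group, {}).setdefault(outcome, 0)
        counts[group][outcome] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

model = train_majority_rule(history)
print(model)  # every future "B" applicant is denied by default
```

The denial of group “B” here says nothing about any individual applicant; it is purely an artifact of which data the model was trained on.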
One case in point is YouTube, which recently revealed it generated about $15 billion in ad revenue.
In previous years, reports came out about how YouTube’s recommendation engine wasn’t all that great.
It wasn’t preventing children using YouTube Kids from being exposed to explicitly violent or weird content masquerading as children’s entertainment. It wasn’t keeping extremist content – including videos that could politically radicalize a teenager or steer them toward conspiracy theories and other antisocial material – from gaining popularity. It also had trouble preventing the spread of misinformation and disinformation.
In these cases, there’s been some progress. YouTube recently pledged to improve kid-friendly programming by letting creators rate their own uploads, backed by machine learning to help vet what’s safe. It has also gone on the offensive against hateful and supremacist videos and is working to better remove deceptive election-related content.
Reality is complicated
While this all seems like quite the thing to dash one’s spirits, I still believe events like Safer Internet Day can do their job well.
Teaching people to be savvy about the internet, but not cocky. Letting parents be more involved in, but not hover over, their kids’ free time. Making sure government is on the public’s side when it comes to regulating big tech. Big tech realizing there’s more to life than the acquisition of wealth to the detriment of society.
All those are important things. Though reality is complicated, it’s nice to have a reminder that while we are never completely safe, at least we aren’t in the dark about the dangers we face online. – Rappler.com