This year’s EuroDIG was hosted by the Communications Regulatory Authority of Lithuania (RRT) in cooperation with the Ministry of Foreign Affairs of Lithuania, the Ministry of Transport and Communications of Lithuania, the Public Institution GoVilnius, the Ministry of the Economy and Innovation of the Republic of Lithuania, and the Information Society Development Committee.
On Tuesday, 18 June 2024, an Insafe delegation co-hosted a workshop session on protecting vulnerable groups online from harmful content – new (technical) approaches.
Protecting vulnerable groups (such as children, young people, women and migrants) from online harm while avoiding mass surveillance and restrictions on free speech is a huge challenge for industry, internet providers, regulators, and users. New technologies, and especially artificial intelligence (AI) tools, could help to automatically identify content that is inappropriate for children and block child users' access to it. AI bots could also intervene if they identify a conversation as containing elements of grooming. Disinformation could be automatically fact-checked and annotated, and illegal content could not only be blocked but also reported to law enforcement (or victims could be informed so that they can choose to report it themselves). Online violence could be stopped immediately, preventing the harm rather than prosecuting it later. The workshop discussed several new technical approaches that may (or may not) reduce the impact of harmful content online. It also considered effective ways to implement these technical approaches from both network and user perspectives.
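By way of illustration only (no specific system was presented in these terms at the workshop), the decision logic such tools imply might be sketched as follows; the classifier, label set and thresholds here are all invented placeholders for a real trained model:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # hypothetical label set: "ok", "adult_only", "grooming", "illegal"
    confidence: float  # model score in [0, 1]

def classify(message: str) -> Verdict:
    """Stand-in for a trained model; here, a trivial keyword check."""
    if "meet me alone" in message.lower():
        return Verdict("grooming", 0.8)
    return Verdict("ok", 0.99)

def moderate(message: str, user_is_child: bool) -> str:
    """Route a message according to the classifier's verdict."""
    verdict = classify(message)
    if verdict.label == "illegal" and verdict.confidence >= 0.9:
        return "block_and_report"   # escalate to law enforcement or a hotline
    if user_is_child and verdict.label in {"adult_only", "grooming"}:
        return "block"              # shield child accounts from the content
    if verdict.label != "ok":
        return "human_review"       # AI alone is not enough; a person decides
    return "allow"

print(moderate("Shall we meet me alone after school?", user_is_child=True))  # -> block
```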
Against this background, an expert panel moderated by Torsten Krause (Project Consultant Children's Rights in the Digital World, Stiftung Digitale Chancen) and composed of Anna Rywczyńska (Coordinator of the Polish Safer Internet Centre at NASK), Žydrūnas Tamašauskas (Chief Technology Officer at Oxylabs), and Andrew Campling (Independent Trustee at the Internet Watch Foundation) engaged in a dynamic discussion with the audience.
The main takeaways from the session were summarised by Sofia Rasgado (Coordinator of the Portuguese Safer Internet Centre at the Portuguese National Cybersecurity Centre). Key points of discussion were as follows:
- Types of content. Various types of harmful content were discussed, including child sexual abuse material (CSAM) and self-generated abuse material, which can be voluntarily produced, shared and misused, or created in response to coercion by organised online groups (sextortion). Both can result in severe consequences, including suicidal behaviour. Pathological content (such as live-streamed content depicting violence, alcohol consumption, sexual abuse, and so on) was also mentioned.
- New technical approaches. Many new approaches, several of which aim to respect privacy, are being developed and implemented to combat harmful content; some are already in use on various platforms. There was particular mention of age verification, client-side scanning, and the privacy issues raised by face verification and false positives. Systems based on AI and machine learning are already being deployed to search for CSAM and to help combat harmful content online (a minimal hash-matching sketch follows this list). Equally, new models are being trained and built to tackle age assurance of users and to detect CSAM. Despite these advances, however, human intervention is still required. Concerns about privacy protection, cybersecurity and open AI were also addressed, as well as typical concerns linked to the use of AI, such as bias, the quality of data sets, accuracy and, importantly, inclusivity and accessibility.
- Establishing good practices. The Polish Safer Internet Centre was cited as an example of good practice in this field: it provides a comprehensive approach, combining awareness centre, helpline and hotline services through a cooperation that has been in place for more than 20 years. The centre's approach also highlights the importance of existing agreements with law enforcement, of acting as a trusted flagger for selected platforms, and of strong cooperation with schools, particularly with youth representatives and teachers, alongside establishing an advisory board and a parental advisory board. Building a strong network is key to fighting harmful content online.
- Intervention by political authorities. Several examples of political intervention were mentioned:
- The European Digital Identity (eID) system appears to be a possible long-term solution for privacy-preserving verification, as it is based on a set of advanced techniques.
- The double-blind approach to age assurance, which protects the privacy of the age verification step: the verifier does not learn which service is being accessed, and the service learns only the age attribute, not the user's identity. The German Government is commissioning the development of a demonstrator of this approach (a sketch of the token flow follows this list).
- Supporting vulnerable groups. For vulnerable groups that may not have a European ID, a digital wallet or personal documents, some systems are already in use to verify their age for online services (even if they cannot yet be used to verify their identity).
- Multistakeholder approaches. There is a need to restore the diversity of multistakeholder approaches, involving civil society, political authorities and their representatives alongside the technical community (cybersecurity experts and cryptographers, for instance), and also those who do not have a technical background, such as psychologists.
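As referenced above under "New technical approaches", the most established detection technique for known CSAM is matching uploads against hash lists of previously confirmed material, such as those maintained by the Internet Watch Foundation. The sketch below is a minimal illustration of the exact-match variant only; the hash list entry is a dummy placeholder, and production deployments add perceptual hashing (for example, PhotoDNA) so that resized or re-encoded copies still match:

```python
import hashlib

# Placeholder entry; real hash lists of confirmed material are distributed
# to vetted platforms by hotlines such as the Internet Watch Foundation.
KNOWN_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # dummy value
}

def sha256_of(path: str) -> str:
    """Exact cryptographic hash of a file's bytes, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_material(path: str) -> bool:
    # Exact matching only: changing a single byte defeats it, which is
    # why deployed systems pair it with perceptual hashes.
    return sha256_of(path) in KNOWN_HASHES
```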
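The double-blind token flow mentioned under "Intervention by political authorities" can be sketched as follows. This is an illustration, not the German demonstrator: the HMAC stands in for a real public-key signature (the shared key here would not exist in practice), and names such as `issue_age_token` are invented:

```python
import hashlib, hmac, json, secrets

# Stand-in for the verifier's signing key. In a real deployment the verifier
# would sign with a private key and services would check against its public
# key, so no secret would ever be shared with the service.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool) -> dict:
    """Verifier side: attest ONLY the age attribute. A fresh nonce per token
    keeps tokens unlinkable, and the verifier never learns which service
    the token will be presented to."""
    payload = {"over_18": over_18, "nonce": secrets.token_hex(16)}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return payload

def service_accepts(token: dict) -> bool:
    """Service side: check the attestation. The service learns that the
    holder is over 18, and nothing else about their identity."""
    claims = {k: v for k, v in token.items() if k != "sig"}
    body = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["sig"], expected) and claims["over_18"]

token = issue_age_token(over_18=True)
print(service_accepts(token))  # True: age confirmed, identity never disclosed
```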
For further information about EuroDIG and the 2024 programme, please visit www.eurodig.org.