How to Safeguard Our Constituents in Digital Development?

London, UK - September 24 - RSVP Now
Digital innovation has immense potential to open up access to vital services and create real opportunities for our constituents. But it can also worsen existing divides, amplify inequalities, and put users at risk of persecution or exploitation.
 
How can the ICT4D community protect the people it works with, without getting so carried away with the potential of digital tech that we put them at risk?
 
One way is to practice digital safeguarding: protecting people from the unintended harms of technology interventions. By following digital principles such as designing with the user, understanding the ecosystem, and addressing privacy and security, we can anticipate the unintended consequences that might come out of our digital technology. These could include online abuse, sexual exploitation, harassment, or bullying.
 
For instance, DAI’s Frontier Insights research in Sierra Leone found that women using mobile phones are often suspected of having affairs. DAI therefore needed to raise awareness of positive mobile phone uses to protect the safety of the women it sought to help.
 
How can we truly understand the plethora of possible consequences of our technology?
 
Please RSVP now for the next Technology Salon London to better understand new technology consequences, and explore questions, including:
  • Who holds the responsibility for safeguarding?
  • What due diligence needs to be done to recognize threats and mitigations?
  • What are new challenges for safeguarding in a world where all our interactions and data are online?
  • What frameworks are in place for us to understand and protect our users?
  • How can we ensure that content provided doesn’t put users at risk?
  • What questions do we need to ask partners, donors, vendors before partnering?
  • Who is innovating and what are they learning?
Please RSVP now to join an intimate, curated group of leading voices from across the aid and telecoms sector, including these thought leaders:
  • Max Baiden, Innovation & Collaboration Adviser, Save the Children
  • Clara Barnett, Digital Inclusion Advisor, DFID
  • Hans van Hooff, Digital & Innovation Manager, Accenture Development Partnerships
  • Sarah Maguire, Director, Technical Services, Governance, DAI
We'll have hot coffee and catered breakfast treats for a morning rush but seating is limited so please RSVP now. Once we reach our 30-person capacity there will be a waiting list!
 
Digital Safeguarding
Technology Salon London
8:30 – 10:30am
Tuesday, September 24th, 2019
Accenture London
30 Fenchurch St (map)
London, EC3M 3BD
RSVP is required for attendance, and you will need to bring photo ID for access to the premises.

Automated Decision-Making in Aid:
What Could Possibly Go Wrong?

New York City - September 19 - RSVP Now
Automated Decision Making (or “ADM”) is the use of computerized systems or algorithms to aid in making decisions. Social development organizations and humanitarian agencies are increasingly looking to these kinds of approaches in education, health care, and disaster response to target programs and predict outcomes.
 
Automated Decision Making is still relatively rare in aid and development work, yet practitioners and funders should be concerned now: ADM has already produced exclusion, bias, and harmful decisions in other sectors. Its effects can be serious, especially when these systems are used on the most vulnerable populations.
 
ADM systems, like most artificial intelligence systems and related technologies, are often put in place with minimal oversight and insufficient accountability mechanisms. And most socially focused organizations have few resources and little capacity to fully understand the implications of these new techniques.
 
Please RSVP now to join the next Technology Salon NYC where we’ll address questions like:
  • What are Automated Decision Making Systems and how do they work?
  • When might automated decision making support social goals in positive ways?
  • What can be done to ensure that automated decision making is not leading to bias or harming vulnerable or historically marginalized individuals or groups?
  • What do donors need to know about these systems to ensure that they are not funding harmful approaches?
  • How can development organizations and humanitarian agencies build the necessary skills and capacities for incorporating these tools? Or should they avoid them?
  • What legislation or other mechanisms could help to ensure that automated decision making does not lead to harm? What should funders be doing to mitigate risks and increase capacities?
Please RSVP now for a lively discussion with leading thought leaders in ADM. We'll have hot coffee and breakfast treats for a morning rush. Seating is limited, so please RSVP now. Once we reach our 35-person capacity there will be a waiting list!
 
Automated Decision Making
September Technology Salon
9:00-11:00am
Thursday, September 19, 2019
Open Society Foundation Offices
224 West 57th Street
New York, NY 10019
RSVP is required for attendance

About the Technology Salon


The Technology Salon™ is an intimate, informal, in-person discussion between information and communication technology experts and international development professionals, with a focus on both:
  • Technology's impact on donor-sponsored technical assistance delivery, and
  • Private enterprise driven economic development, facilitated by technology.
Our meetings are lively conversations, not boring presentations. Attendance is capped at 35 people, and frank participation with ideas, opinions, and predictions is actively encouraged.

It's also a great opportunity to meet others motivated to employ technology to solve vexing development problems. Join us today!
 
Copyright © 2019 Technology Salon, All rights reserved.

