3️⃣ AI Ethics with Lauren
Context is complicated - not even humans grasp the whole picture most of the time! Think of the last time you mistook a street pole for a person, or a pair of shoes in dim light for your cat. Moving toward more robust contextual understanding is certainly a step in the direction of intelligence for AI.
Louis’ observation about the ethical caution required to identify theft in a store using PSG is correct - it gets sticky. If the model is trained on theft arrest records, and that data comes from a region with a high rate of arrests driven by racial profiling of Black people, the model will learn to associate darker skin tones with theft and misidentify theft in its assessment of context. This is an extreme case, but unfortunately it happens far too often and without awareness of the massive potential for harm. In a less extreme case, a false positive may be produced simply because someone is photographed picking up a product in an unusual way that happens to look like theft. Proper precautions and mitigation efforts can help avoid these negative scenarios.
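To make the bias mechanism concrete, here is a minimal, entirely hypothetical sketch (the group names and numbers are invented for illustration, not drawn from any real dataset): if arrest records over-represent one group because of uneven policing, a naive model that estimates theft likelihood from those records inherits the enforcement bias rather than the true behavior.

```python
from collections import Counter

# Hypothetical "training data" of (group, arrested) pairs. Group B is
# over-policed, so arrests are recorded at triple the rate even though
# true theft rates are assumed identical across groups.
records = [("A", 1)] * 10 + [("A", 0)] * 90 + [("B", 1)] * 30 + [("B", 0)] * 70

# A naive model that learns P(theft | group) straight from the records
# bakes the enforcement bias into its predictions.
arrests = Counter(group for group, arrested in records if arrested == 1)
totals = Counter(group for group, _ in records)
learned_prior = {group: arrests[group] / totals[group] for group in totals}

print(learned_prior)  # {'A': 0.1, 'B': 0.3}
```

The model "sees" group B as three times as likely to steal purely because of how the labels were collected - which is exactly why auditing training data sources matters before deploying a system like this.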
On the flip side, a great application of PSG would be automatic caption (alt-text) generation for images. In digital contexts, people with sight-related disabilities could more easily access accurate image descriptions without relying on the image owner to add them. This will help people learn and move through the world with less inhibition, which is definitely a win. It will be interesting to see how this technology progresses and what its future applications look like!
- AI Ethics segment by Lauren Keegan