The Tip of the AI‑ceberg: 4 Ways We’re Experimenting with AI
Around the holiday season, we like to send out a company card that pays tribute to the past year and shares our hopes for the coming one. In 2019, we wished for a less divided world. In 2020, we shared a company Zoom collage. And in 2021, a joke about jabs. This past year was no different.
At the end of 2022, we toyed with ideas around soaring energy costs, mounting climate concerns, and of course, the emergence of artificial intelligence tools like DALL-E.
But in the end, a heartwarming Frosty the Snowman plot won out over concepts like AI, for fear that it would be too “niche” to resonate with our global audience. Boy, were we wrong.
To call AI “viral” would be an understatement, or perhaps the perfect descriptor for its infectious nature. Over the last 8 months, the term “AI” has been Googled 300% more than in the previous 5 years. Every day there seems to be a new AI tool or headline. AI is helping people create videos, slide decks, and music. AI can help you write marketing copy or animate faces in photographs. There’s even a tool that rates the resemblance of your voice to Freddie Mercury’s. Amidst this slurry of innovations are concerns over data privacy, academic integrity, and job security for creatives and coders alike.
For public health, the potential is endless. Disease detection and diagnosis, epidemic surveillance and outbreak prediction, health system optimisation, vaccine development and more can all be revolutionised using AI.
For our eclectic team of behavioural scientists, public health experts, human-centred designers, and communication specialists, it presents a unique opportunity to explore what can be optim-AI-sed and what should be left to the humans.
We asked a few members of our team to share how they’ve been bringing AI into their work. Here’s what they had to say:
Pictures worth 1000 worries
Jim, Creative Director
I started experimenting with image-generating AI tools when DALL-E and Midjourney first came on the scene. If you don’t have the time or budget for custom illustration, and stock imagery isn’t quite capturing the specific characteristics of the people you’re working with, AI comes in somewhat handy. I recently used Midjourney to create avatars to represent users in a user journey map. It gave me a nice base to which I could add more culturally representative nuance. So while the avatar is still a work of fiction, it does have a bit more life and a bit more truth than a generic stock illustration, icon, or photograph.
Photos generated using DALL-E and photoshopped to remove some questionable AI artefacts.
We also leaned on Midjourney to produce mock maps of fictitious places for training case studies. The generated topographic maps were then lightly edited, adding details such as our fictitious place names and key roads. Illustrating a fictitious topographic map from scratch is an expensive endeavour, and something we would struggle to include in this kind of training. Normally, we’d include a simple vector illustration of our mock location with a few parameters that we wanted participants to plan for, such as a handful of towns connected by roads or a flooded river. In this case, the hope is that a more detailed map will encourage participants to think about how remoteness and geographic challenges might influence their work. These maps better reflect real challenges that our participants face daily, making the overall training richer and more effective.
It’s a use case that I’m much more comfortable with and one that is less complex than attempting to represent people.
Recently, we wrapped up a project on adolescent reproductive and mental health in Ghana. We were lucky enough to get some great photos back from the field research and testing. However, using those images to communicate about our work on these more sensitive topics could genuinely put the people pictured at risk. I can see the potential for AI here, to produce representations of humanity that help generate genuine empathy, without compromising the safety and privacy of real people.
On one hand, AI lets us quickly create imagery that better represents the people we’re working with, in places where the best option might otherwise be a standard icon. On the other, because these aren’t true images, there’s a lot of ethical fuzziness. Beyond just copyright fears, these Western-based tools are likely over-sampling Western imagery. It’s hard not to look at these images and wonder if they’re constructions of a white gaze, muddied by Western references: visuals steeped in stereotypes, misrepresenting the features, clothing, and expressions of people in places I’ve never visited. And the uneasiness doesn’t stop there. How can we, as a company that champions connection with and advocacy for real people, use images of fake people? To what degree might an AI-generated image accidentally resemble a real individual? How much is AI stealing the work (and livelihoods!) of real artists?
I don’t see clear answers to any of these questions. I also don’t see text-to-image services going away anytime soon. I do hope that these AI programmes explore some kind of shared royalty system to compensate the artists they’ve built their companies on.
Hands-free with Fred
Sherine, Director
When I’m conducting an interview over Zoom, I always find note-taking to be the hardest part. Connecting virtually is challenging enough—having to ask questions, actively listen and engage, all while taking detailed notes, adds another layer of complexity to an already stressful situation. The pressure to jot everything down can cause you to miss out on important details and forget to ask even more important questions. Not to mention, it makes it more difficult to establish a human connection with the interviewee. As a totally remote company that relies on the power of these interactions and the small, human details they unearth—mastering the Zoom interview is key.
In the past, we often mitigated this challenge by bringing on another member of our team to take notes. Recently, we’ve started calling in interview help from Fred, the AI notetaker from Fireflies. Talking without the burden of typing was a foreign feeling, but a great one. With Fred, I felt like a much more relaxed, present, and effective interviewer, which led to a more enjoyable and insightful conversation on both sides of the screen.
The resulting transcript is pretty spot-on, though it doesn’t thoroughly capture all the nuances of a conversation. You still have to go back and double-check it, so you’re not totally liberated from note-taking shackles. But it’s a step towards greater connection and efficiency. However, I do wonder if the next iteration of Fred will eliminate the need for me to be present at all.
Scientists for ChatGPT
Lydia, Behavioural Scientist
When it comes to asking questions, we like to get really, really specific. For example, we reframe common design challenges like “how might we encourage adolescents to visit a health facility?” by using contextual features that our research has revealed can drive behaviour. That question can then become something like, “how might we build self-efficacy for adolescents to make healthy decisions?”
To answer this question, the first thing you have to do is look back. Have there been any validated interventions to address that particular behavioural barrier? I’ve turned to ChatGPT as a starting point to find what has already been proven to work. I might ask, “What are some proven interventions for increasing self-efficacy in young people?” It spits out a list that I can then build off of. How can we make something that might be easier to use, more attractive, more social, or fit better into the lives of the people we’re designing with?
I’d recommend ChatGPT as a tool to kick off your research, kind of like a clearer version of Google. It’s helpful for these types of simple, automated tasks that are essentially searching and/or summarising, but not for complex tasks like developing solutions.
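If you’d rather script this kind of kickoff prompt than type it into the ChatGPT window, a minimal sketch using OpenAI’s Python client might look like the following. The model name and prompt wording here are illustrative assumptions, not a fixed recipe:

```python
# A sketch of scripting a research-kickoff prompt with OpenAI's Python
# client. Model choice and wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

question = (
    "What are some proven interventions for increasing "
    "self-efficacy in young people? List each with a one-line summary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{"role": "user", "content": question}],
)

# Treat the answer as a starting point to build off, not a source of truth.
print(response.choices[0].message.content)
```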
Scientists against ChatGPT
Ipsitaa, Behavioural Scientist
The first time I experimented with ChatGPT was to see if it could speed up my lit review process. I was researching how colour impacts consumer behaviour. I asked it to review the literature and provide a summary of the important studies and their methodologies, and ChatGPT responded with, “Sure! Here is a summary of some important studies on the role of colour in consumer good packaging.” It gave me a list of four studies that looked really good. I was shocked. And I was shocked again when I turned to Google Scholar and found that three out of the four studies didn’t exist. Maybe those studies weren’t openly available or searchable, or perhaps the model was mangling sources from its training data, but I genuinely couldn’t find those papers. I found it quite funny that it had so confidently provided me with studies that didn’t exist. All that to say: don’t trust everything ChatGPT says. Make sure you double-check your information!
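If you’d rather automate that double-checking step than eyeball Google Scholar, one option is to look each suspicious citation up against a real bibliographic index. Here’s a sketch using Crossref’s free public REST API; a manual search, as described above, works just as well:

```python
# A sketch of sanity-checking a citation against the Crossref REST API
# (https://api.crossref.org). Illustrative only; any bibliographic
# index would do.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[str]:
    """Return the closest-matching titles Crossref knows about."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# If nothing close to the claimed title and authors comes back,
# treat the citation as a likely hallucination.
for title in crossref_lookup("role of colour in consumer goods packaging"):
    print(title)
```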
Image-making, note-taking, and brainstorming are just the tip of the AI-ceberg. We’re eager to explore the opportunities and limits of AI in our work and see how our creative and scientific processes might evolve with this new technology. Can AI help us better connect with communities? Can it help us co-create solutions that empower people to engage in healthy behaviour?
We’d love to hear about where your experiment-AI-tions have taken you! Is it an angel or a devil in disguise? Let us know on Twitter or LinkedIn.
#AIatWork