Events

Tech Leader Chats: Creativity in the Age of AI with Kristine Howard


For the last two years, the tech industry has been telling us to hand over our creative spark to machines. But what are we giving up when we do that? And are we being honest with ourselves - and our users - about what AI can and can't do?

The way we talk about AI matters. The stories we tell about technology shape how people use it, trust it, and integrate it into their lives. As tech leaders, we have a unique opportunity to guide this conversation and help our teams and users understand both the possibilities and limitations of these new tools.

In this Tech Leader Chat, Kristine Howard, who recently wrapped up her role as Head of Developer Relations APJ at AWS, shares her thoughts on AI. She raises a few problems with the current status quo, and challenges whether we should be using AI differently, or not at all.

She walks us through some of Silicon Valley's fumbles and mistakes, and inspires us to think more deeply about the way we position this revolutionary technology for the masses.

In this session, Kris shares:

  • What Silicon Valley gets wrong about creativity and AI
  • Why human creativity matters (and always will)
  • How we can do better when talking about AI
  • Real examples of AI messaging gone wrong

About the speaker

Kris Howard worked in the Australian tech industry for more than 20 years at companies like Channel Nine, Canva, and AWS, where she most recently led the APAC Developer Relations team. She's now trying on early retirement, which gives her a lot more time to think about technology and how we as tech leaders can do better. She helps support YOW! Conferences, bringing world-class tech education to the Australian developer community. As a co-organizer of Sydney Technology Leaders (one of the largest CTO-focused Meetup groups in the world), Kris loves bringing tech leaders together to share ideas and solve problems.

Creativity in the Age of AI 

See below for:

  • Key takeaways from the talk
  • The recording
  • A transcript of the talk

Key takeaways
3 considerations to make when using AI

#1: Consider how you are using LLMs and what the long-term effects might be

#2: Think critically about the systems you design and build

#3: Outsourcing writing can mean outsourcing thinking

Recording and slides

You can view the slides from the talk here - and see below for the full recording.

Transcript

>> Kris: Good morning, everybody. For those of you who don't know me, my name is Kris Howard. Despite the American accent, I've been living and working in the Sydney tech industry for most of the last 20 years. Happy Thanksgiving, by the way. And as of three or four months ago, I am retired, which is amazing. That's a talk for another day, but it's going exactly as well as I hoped it would, which is great. This is a talk I gave a few months back at an event in Sydney called the Future Tech Collective, and it synthesizes a bunch of stuff I've been thinking about over the last two years with the rise of large language models.

But I want to start by not talking about machine learning or big data. I want to start by talking about one of my favorite books. Some of you know I'm a really big fan of the author Roald Dahl; I've run a website, Roald Dahl Fans, since 1996, so that site is coming up on 30 years old. Maybe you don't know this, but Roald had a really successful career writing short stories before he became a children's book author. One of those stories, "The Great Automatic Grammatizator", was published in 1953 in the book Someone Like You.

I'll give you a very quick recap of the story. It's about a guy named Adolph Knipe, who is a computer genius but really longs to be a writer. He's a frustrated writer who can't get published. So he goes to his boss one day, Mr. Bohlen, and says, I have an idea to build a computer that will write stories. Remember, in 1953 computers were just becoming a big deal. And he pulls it off: he builds this sort of typewriter with buttons and knobs that can spew out a story. He succeeds, they set up a publishing company, and they start churning out mass-produced literature. It's wildly successful. He talks about how he can turn a knob to raise the amount of romance, or press a pedal and the amount of suspense goes up. They begin making thousands and thousands of dollars. And in the final part of the story, they decide that the next obvious step in their world domination is to buy out real authors: go to someone like Stephen King and just pay him to never write again, to give them his name so they can churn out stories in his style, and he gets a cut of the profits. And everybody knows that before M. Night Shyamalan, Roald Dahl had the sting in the tail in all his stories.

The surprise is that you find out the narrator of this story is an author himself, and that over half the stories published in the English language in the world of the story are now written by Adolph Knipe on his Great Automatic Grammatizator. The struggling writer is listening to his nine hungry children cry, trying to resist the urge to sign that golden contract on the desk. "Give us strength, O Lord," he prays for all true artists, "to let our children starve." So that was 1953. Now I want to show you a much more modern work of art. This was an ad that Google ran during the Olympics just a few months ago; it's on YouTube, you can watch it. It features a little American girl who wants to write a letter to her idol, the Olympic hurdler Sydney McLaughlin-Levrone. She wants to be like her; she's inspired by her. And in the commercial, the little girl's dad wants to help her write a letter to her favorite athlete. So what does he do? Does he sit down with her? Does he help her write the letter? No, he pulls up Gemini and uses it to write the fan letter to her favorite athlete.

Well, I didn't like that, and I was not the only one who had a negative reaction to that commercial. There's a really great rant Alexandra Petri wrote in the Washington Post; she said it made her want to throw a sledgehammer through her TV every time she saw it. Google pulled the ad off TV, and on YouTube they've hidden the downvotes and turned off the comments. You should read the whole piece, but this bit I really liked: "What will these buffoons come up with next? Gemini, propose for me. Gemini, tell my parents I love them. Lying on your deathbed, Gemini, write a letter to my children saying all the things I wish I'd been able to tell them. What was my favorite thing about being alive?"

Some of the issues with AI today

>> Kris: And that really was what inspired this talk. I am not here to talk about legitimate uses for machine learning models: coding assistants, recording and summarizing meeting minutes, handling customer support inquiries, any of the millions of legitimate machine learning tasks, even the old ones from before we called it AI, like computer vision or speech recognition, that we've since rebranded as AI. I want to talk about how we as an industry are actually promoting and marketing this technology, specifically large language models, with ads like this one. What impression are we giving to all the people who don't understand the limitations and problems of these models? Why do we keep pushing this technology as a replacement for human thought, human creativity, and human connection? Because guess what: in case you thought the whole industry learned a lesson from that Google advert, we definitely did not. I took this screenshot last night. Can you imagine, at Thanksgiving today in the US, sitting there with your phone using ChatGPT to have a conversation with your racist uncle? I am thankful that I'm not so devoid of human emotion that I need a chatbot to interact with my family. And also, I'm pretty sure that image is AI generated; those people have nine different lamps in their dining room. So I want to talk briefly about some of the issues with AI today, and then get to the big ones that are really important to me. One of the challenges with a talk like this is that it changes so much every single day. I have a bookmark folder called AI Rants that just keeps getting longer and longer, and as you saw, I was adding examples as recently as yesterday.

You can't trust images anymore

>> Kris: First one, of course: you can't trust images anymore. We've all realized this. If you didn't see it, the Verge had a really great review of Google's new Pixel 9 camera, which has the Magic Editor built right in. They said that anyone who buys a Pixel 9 will have access to "the easiest, breeziest user interface for top-tier lies" built right into their device. And they showed how you could use it to invent images of people using drugs, or tanks in the street, or people shooting guns. All of us watching this grew up in an era where a photograph was by default a representation of the truth. When I get in my GoGet car and the tank is under a quarter full, the way I'm supposed to prove that is by taking a photo of it with my phone. We knew, of course, that Photoshopping a photo has been possible for years.

And deepfakes too, but they're sort of outliers: they took specialized tools and specialized skills. Same with women's magazine covers; we know that airbrushing is a thing. But fake is seen as the exception, not the rule. If I say the words Tiananmen Square, you are probably thinking about the same photograph as I am. Same if I say Abu Ghraib, or Napalm Girl. These images have defined wars and revolutions. They encapsulate truth to a degree that's impossible to express with words, and they were pivotal; we put so much value in them. The default for a photo is that it's something that's true, and that's about to flip. From pretty much this year onwards, the default assumption about any image is that it's fake, because creating realistic and believable fake photos is now completely trivial, and we are not prepared for what comes next. The next Abu Ghraib is going to be buried under a sea of AI-generated war crime snuff. The next George Floyd is going to go unnoticed and unvindicated. The Verge's review ends with "we are fucked". I apologize for that one. Of course, you also can't trust facts and citations anymore.

There was a recent example of LJ Hooker listings for homes for sale in Australia just making up facts about the neighborhood, completely hallucinating them. Because, as you and I both know, these models are autocomplete. They make up facts; they have no underlying knowledge or abstract understanding of the world. They're just putting together words that are semantically likely to occur next to each other. So facts and citations: you can't trust them. That goes for science too. We are seeing this used to fabricate research and spread disinformation. A study just a few months back found nearly 140 papers on Google Scholar that appear to be AI generated, and 57% of those covered topics like health, computational tech, and the environment: areas that are relevant to, and could influence, policy decisions. Of course, people have always faked data and falsified studies, but it's now exponentially easier to do.

>> Kris: The researchers say the abundance of fabricated studies seeping into all areas of the research infrastructure "threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record". And look, you're a technical person. You know that when you use ChatGPT, you need to double-check the claims; you don't inherently trust what it spits out.

But if AI-generated text is presented as vetted academic research in a popular scholarly database, why would you even think to verify that what you're reading is true? Of course, models are trained on stolen data. There are a number of pending lawsuits against all of these companies over the use of authors' and artists' work. This story broke literally last week: a popular data set that was used to train many of the models included every film nominated for Best Picture in the last 66 years, over 600 episodes of the Simpsons, 170 episodes of Seinfeld, every episode of the Wire, the Sopranos, and Breaking Bad. I'm sure the writers of those shows really appreciate that. So if you wonder why many of the guilds in Hollywood have been striking over AI, this is why. These artists deserve to be paid for their work, not have it used to train models that could very easily be used to replace them and put them out of jobs. An example much closer to home: just a few months back, the Books3 library that was used to train Meta's LLaMA model, among others, was found to include more than 180,000 pirated ebooks, including many from Australian writers like Geraldine Brooks and Liane Moriarty, who wrote Big Little Lies. Do you know what the average income for a writer is in this country? $18,200 per year. One of them said in that article: if tech companies are able to generate their own books and stories, why would they sell anything else? Why would they give prominence in the market to any sort of competing efforts? I don't see why they would. Talk about letting your children starve. And look, maybe you think that it's totally defensible fair use, that these AI companies aren't doing anything unethical, which I'm sure they could totally prove if they hadn't "accidentally" erased a ton of the evidence that was subpoenaed in one of these recent lawsuits. Yeah, totally accidental.

Language models embody covert racism in the form of dialect prejudice, study shows

>> Kris: Next: models are trained on biased data. This is a really interesting one. This paper came out back in August. There's been talk for a number of years now that because humanity is biased, when you train on human data you encode that bias and you get racism in machine learning models. Most of it is overt: you ask a model to describe someone of a particular race, or it assumes a doctor is always a man. Or the MIT visual data set, a big one that was used for training, had racist and derogatory slurs on images of Black people, Asian people, and women. But this paper focused on a more subtle, covert racism, and it's wildly racist. They demonstrated that language models embody covert racism in the form of dialect prejudice.

Basically, if you give the models a sample of writing in African American English vernacular, the stereotypes they express about that person are more negative than any human stereotypes about African Americans ever experimentally recorded, even though if you ask a model what it thinks about African Americans, it says very positive things. So what the models say about people of a particular race is very different from what they covertly associate with them. And you can see the potential for harmful consequences from this dialect prejudice: assigning people less prestigious jobs, maybe even convicting them of crimes or sentencing them to death. It's really important. And look, the writing these models create is crap. I'm sorry, it is, because it's trained on a huge corpus of training data. All a large language model can ever do with your writing is make it sound average at best.

Bruce Sterling claims he's identified a new dialect. He calls it “Delvish”

>> Kris: Bruce Sterling is an American science fiction writer; some of you may have read his work. One of the founders of cyberpunk, he wrote this really cool blog post in July, taking a rather philosophical view of all this. He calls writing from LLMs a new dialect, the world's first patois of non-human origin. And he actually backs it up with data. He calls this language "Delvish", because guess what? It tends to use the word "delve" a lot. There's a graph of papers with the word "delve" in the title or abstract, which shows you something. Some of the top Delvish nouns: advent, forefront, insights, trajectory. Adjectives like multifaceted, transformative, rapid. Delvish really likes to delve into everything. It compares things to a tapestry. You're always embarking on a journey.

>> Kris: Everything is elevated or embraced, with overdramatic titles. And if you think that sounds like LinkedIn, you are right. This came out literally yesterday: they estimate that something like 50% of the content on LinkedIn today is AI-generated slop.

So think about this: the Delvish is only going to get worse, because LinkedIn are now scraping that content to feed into their own models. We're going to get Delvish fed into the models that train the next generation. And of course it's all doing unimaginable environmental damage. Sam Altman admitted earlier this year what is the least well-kept secret in the world: the AI industry is headed for a massive energy crisis. The energy used to train GPT-4 could have powered 5,000 American homes for a year. Within years, large AI systems are going to need as much electricity as entire nations. And not just energy: they also need enormous amounts of fresh water to cool processors and to generate electricity. And we're seeing that a lot of the big tech companies who had previously made lofty environmental claims are now walking them back, because they know they can't hit them.

Altman, famously, is banking on nuclear fusion to solve the problem. If you mention this to a true believer, they'll say nuclear is the way out here. But most researchers believe he's being wildly optimistic. He wants to spend something like $7 trillion on it, which would be the largest investment in all of human history, and even if it works, we're not going to feel the effects until the middle of the century, another 25 years away. And it's not going to do anything about the water crisis, or all the toxic materials generated by mining, or the toxic waste we're producing. The UN has estimated that by 2030, half of the world's population will be facing severe water stress. And as someone sitting here in Sydney: Sydney Airport was recorded yesterday as the hottest place on planet Earth. So we need a big focus now on improving the sustainability of this technology, with strong new regulation by legislators. And look, a lot of people write all of that off. They say, it's too late, the genie's out of the bottle; you should just give up and give in and embrace this future.

Outsourcing your writing means outsourcing your thinking

>> Kris: This is the big one for me. I feel really strongly that outsourcing your writing means outsourcing your thinking. It would be very easy for me to prompt ChatGPT, or Canva, or any number of services to just spit out this presentation. It might even be better than the one I did, and I would have saved myself a lot of time and effort. And that's one of the big myths of generative AI.

This is called the productivity myth. This myth says that saving time is the most important goal for any of us; that anything we spend time on, we should automate the crap out of, even things like writing a fan letter, doing your homework, talking to your family. But the goal of communication, the goal of writing, isn't simply to churn out completed pages. It isn't the output. Why do we give students essays? It's not because the world needs more crappy essays on Hamlet. It's because the process of researching a topic, of organizing your thoughts, of constructing an argument, helps you practice critical thinking. The page is the end result of our thinking about the work, our words, and why we're writing. Writing is thinking, in a form you can share with other people. And to take away the ability to write for yourself is to take away the ability to think for yourself.

>> Kris: At PyCon AU just a week ago, the keynote was given by my friend Dr. Linda McIver, the founder of the Australian Data Science Education Institute. Linda is an educator who has thought really deeply about how we teach students and how we measure their progress. And Linda talks about how reality is messy and complicated, data sets are messy and complicated, and we need society to be strong on critical thinking: to see through misinformation, and to know enough about technology to know what questions to ask.

So what she's doing is showing teachers how to engage students with real data sets to solve real problems, and to evaluate and question the conclusions they come to. Instead of grading a kid on getting the right answer, they get measured on how well they can explain how their solution works, what doesn't work about it, what they could do better, and what other explanations there might be for what they're seeing. LLMs don't do this. They simplify everything to something that sounds correct, even if it's wrong. They don't question their assumptions, they don't iterate, they don't improve the way a human would. It's just autocomplete. If you ask your question again in a slightly different way, you'll get a completely different, perfectly plausible-sounding answer, without having to exercise a single neuron in your head.

And I'll give you a personal example of this. I used to work for AWS, and I know there's at least one ex-AWS or Amazon person on this call. One of the things Amazon does, and maybe some of you have seen it, is what they call a Culture of Innovation presentation, where we talk about how Amazon innovates. I gave many of these talks over my time there. One of the things we talk about is the Working Backwards methodology, the process Amazon uses to develop new products. As part of this, they create something called the PRFAQ: a press release plus frequently asked questions.

Every product or service Amazon releases goes through this process of iterating on this document, 20, 30, 50 times. Every time you go through it, the value proposition becomes clearer and the customer benefit becomes clearer. Amazon does some stuff that I think is not great, but this is one of the unabashedly great things; I'm actually a fan of it. Well, earlier this year there was a big push internally: how are you using Gen AI to 10x yourself? Everybody's got to use these new tools. And one of the first things one of my colleagues came up with was, let's automate creating a PRFAQ. You just put in your product idea and the problem, and it'll spit out the PRFAQ. Great, it saves us months of effort. And I was like, are you insane? To me that's so completely antithetical to this whole methodology, to this whole process. It assumes that the document, the artifact, is the goal you're trying to get to, as opposed to a record of the journey you took to get there. The discussions, the feedback you get, the talking to other people: that's where the real value is in this process. So I had huge problems with that, and yeah, it was one of the signs that maybe it was time for me to move on to different pastures.

There's a problem with how we are positioning AI as an industry

>> Kris: And that sort of brings me to my other big problem with how we are positioning AI as an industry. You can see behind me that I'm a creative person. I think creativity and communication are at the heart of what it means to be a human being, and these tools reduce everything down to autocomplete. An LLM does not experience satisfaction, pleasure, guilt, responsibility, or accountability for what it produces. It does not use language the way humans do. Sitting here in front of you, I had to think about the words I was going to use.

I was saying to the organizer before that I was worried some of you might have come into this expecting to hear how to use AI in your DevOps pipeline. I had to think about how to position this talk for my audience. Some of you might be all in on this technology; I had to take a social risk. You might disagree with me, you might disagree with how I frame my arguments, and it might affect how you think of me. These tensions and frictions and ambiguities are at the heart of what it means to be a human being. We need these frictions to resolve disagreements, to develop more in-depth understanding of topics, and to confront wrongs. But LLMs don't do that. ChatGPT doesn't risk anything when it's prompted and generates some text. It seeks to achieve nothing but concatenating tokens into grammatically sound output. There's no intention behind it to communicate.

Art is something that results from making decisions, right?

>> Kris: And that carries through to art too. This is an amazing piece; please go read it. It was in the New Yorker at the end of August. Ted Chiang wrote it, and as part of it he tries to define what art is. He decides that art is something that results from making decisions. At a very simple level, if you're writing a story, the choice of what word comes next is a decision. So if you put in a 50-word prompt and an LLM generates a 10,000-word story based on it, it has to fill in for all the choices, all the decisions, that you're not making. And there are different ways it can do that. One is to take an average of the choices that all the other writers have made, based on the training data found on the Internet, and that brings everything down to that average. It generates Delvish, a really bland language.

Another way is to engage in style mimicry, where you say, write this in the style of Stephen King, which is going to produce something highly derivative. In neither case does it create interesting art. The same goes for visual art. He has a really good example about photography: when cameras came out, people argued that photography wasn't art either, partly because they didn't recognize the decisions, the choices, that photographers made.

Nowadays we're much more likely to recognize photography as a unique art form. And people say, oh, well, doesn't that mean an LLM will eventually get there, if I put enough choices and decisions into my prompt? But the whole point of these models is that you get a lot more out of them than you put in. If you wrote a 10,000-word prompt to generate a 10,000-word story, you should have just written the story. Nobody does that; they write at best a couple hundred words and get thousands out. And that's what prevents these models from being effective tools for artists. Of course, I'm really annoyed because he uses "The Great Automatic Grammatizator" in his intro, and for the record, I wrote this talk before I saw his piece. It was my idea first. But it just shows it was a really good example.

And speaking of the Great Automatic Grammatizator: we are now at the end of November, the end of National Novel Writing Month. Some of you might be aware of this; it's been going on for years, and people try to write a 50,000-word novel in the month of November. There was a huge controversy in September because the organizers of this event came out defending the use of AI to generate your books, your creative writing. They said there's nothing wrong with it, and that banning it would be non-inclusive of those who are disabled or from marginalized communities. So if you're against AI, you're against disabled people. This did not go over well. Lots of established authors resigned from the writers' board, and then of course it came out that the event was being sponsored by a generative AI writing company. One of the disabled writers said this was terribly insulting: it's insulting to imply that the only way members of marginalized communities can get their foot in the door is through the use of a plagiarism machine. So of course the organizers ended up backing down. They revised their statement multiple times and apologized. But as one of the authors said, NaNo at its heart encourages people to set aside time for daily writing practice, to push through barriers of self-doubt, fear, boredom, and writer's block, and to fearlessly write badly.

Promoting AI to replace that is taking away the most important part and replacing it with something weird and terrible. But of course, where some see controversy, others see opportunity. This literally came out a week ago: a new startup that's going to use generative AI to replace all the work of a publisher. Instead of having your work seen by a publisher who proofreads it, gives you suggestions to improve it, designs the cover, distributes it, and generates an audiobook, instead of spending six to eighteen months going through the actual system, they'll just do it for you in two to three weeks, using an LLM to spit all of that out. My sister, who's an editor, found this really interesting. And they're claiming they're going to publish 8,000 books in 2025 alone.

We're going to have whole bookstores full of Delvish. And how long do you think it'll be before somebody starts offering artists the rights, a cut of the profits, to use their names and just let the machines take over? It's going to happen. Roald Dahl called it. Maybe you don't see a problem with this. You know who does? Nick Cave. Some of you know the Australian artist Nick Cave; he has a website where he answers fan questions. Last year, some brave soul sent him a song that ChatGPT wrote in the style of Nick Cave and asked him what he thought of it. Cave said that many fans have done this, and without fail, all of the songs sucked. He said algorithms don't feel, data doesn't suffer. ChatGPT has no inner being; it has been nowhere, it has endured nothing. He called it "replication as travesty", saying that in time it could perhaps create a song that is, on the surface, indistinguishable from the original, but it will always be a replication, a kind of burlesque. For him, songs arrive out of suffering, the complex internal human struggle of creation, and he called this a grotesque mockery. He's actually written a couple of different letters about generative AI; it's worth looking them up. Okay, so look, I've been ranting for a good 20, 25 minutes now. Where does this leave us as an industry, as people who in many cases make our living from this technology?

Companies are investing billions of dollars in artificial intelligence without clear monetization plans

>> Kris: First, a little bit of a warning for all of you: investors are realizing that this might be a massive bubble. Billions of dollars have been invested in a technology that has yet to turn into a profitable business for anybody but Nvidia. And just last week, their stock price started to go down, because people are realizing that we might be at the top; we're getting diminishing returns out of this. Companies like Google, Microsoft, and Meta are committing vast amounts of their resources to AI without a clear monetization plan. OpenAI may lose $5 billion this year and run out of cash in the next 12 months unless they get a huge new injection of capital, which they are trying desperately to get. At the same time, their leaders are abandoning the company, and they are pivoting from a nonprofit to a for-profit entity. So Wall Street is starting to get skeptical. If your company is gambling on AI as your path to future revenue, I would think carefully about mitigating your personal risk and keeping your options open in terms of your career long term.

I also want to encourage you to think about your own use of these services. Look, some of this was fun. We were talking about this before; Vivek was saying, you ask it to write in haikus. Everybody did that, or we were taking selfies. I made a selfie of myself as a superhero, of course, with Canva's Magic Media, and I've created funny images for the Sydney Tech Leaders slides. But I also have a lot of friends who are artists and writers and craftspeople, and I'm sure you do too. What do they think about generative AI? Do you think your use of it devalues what they do? Are you using AI-generated content for commercial purposes, making it even harder for artists to make a living? And if you're someone who has started outsourcing all your writing to ChatGPT, are you also outsourcing your thinking? Are you prioritizing output, the end result, over the effort, the practice, the thinking it takes to get there? What ultimately distinguishes your communication from anybody else's? Are you contributing to the growing sea of Delvish? How are you ever going to get better, develop your own voice and your own critical thinking skills, if you aren't practicing? And what example are you setting for your kids? And if you are an artist or a writer or a content creator of any kind, how are you controlling how your content is being used? This is something a lot of people haven't had to think about.

But I strongly urge you, if you are sharing content online, to review the platforms you're using and consider whether you want to opt in, opt out, or even block access to your data. And this is where most of us are, I think: actually working on AI projects. There are really great projects out there. When I gave this talk in September, the other speaker's wife had motor neurone disease, and he used generative AI to recreate her voice, to give her her voice back. There are some amazing uses of this technology, and I don't want to throw those out. But I really want you to think critically about the systems you're designing and building. If you are training models, do you know where your training data set has come from? Is there any stolen material in it that might expose you to litigation? If you are training on your customers' data, do they have to opt in, which is preferable?

Or are you acting shady and burying an opt-out link deep in the settings? And yes, LinkedIn, I am looking at you. If you're running a web crawler, does it adhere to the robots.txt standard to allow people to opt out of scraping, or are you scraping regardless? Is your use of AI solving a real problem? Is it making the world a better place? Or are you just doing something flashy and chasing a trend for the sake of it, to put that little sparkle icon on your website somewhere? Do you need to jam in a large language model when a very simple machine learning algorithm would work just as well? Are the architectures you're building efficient and sustainable, or are you contributing to the massive energy and water crisis we're facing? And for those of us in leadership positions: are you seeing the ROI on these projects? Is the upside actually there? If not, we should be pushing back. Does your company have an official AI policy? Are you transparent with your customers about where you're using AI and where your customers' data is going? And if you're in an industry that's affected by large language models, maybe you should reconsider whether you should be using these models at all.
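On that robots.txt question, here's a minimal sketch of what respecting the standard looks like, using Python's standard-library urllib.robotparser; the bot name and URLs are purely illustrative, not any particular crawler's configuration:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt before crawling.
    robots = RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()

    # Only fetch a page if the site's rules allow our user agent;
    # otherwise the site has opted out of scraping and we skip it.
    page = "https://example.com/some-page"
    if robots.can_fetch("ExampleCrawler", page):
        print("allowed to fetch", page)
    else:
        print("skipping", page, "- disallowed by robots.txt")

Honoring this check is voluntary, which is exactly the point of the question above: the standard only works if crawler operators choose to respect it.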

Procreate: We need to be very deliberate about how we talk about AI.

>> Kris: Procreate, as many of you know, is an Australian tech company that makes graphic design software that's really popular with artists and illustrators. Earlier this year the CEO, James Cuda, said: "I really fucking hate generative AI." They have an explicit AI statement on their website and have pledged to never integrate generative AI tools. And needless to say, this approach resonates really well with their core customer base. So if you're in an industry serving artisans, you might want to think about that. And lastly, I think we need to be very deliberate about how we talk about AI. You're watching this; you know the limitations of this technology, so don't gloss over them for everybody else. Large language models do not reason. They are not sentient, and we should not pretend that they are. When you ask ChatGPT to count how many Rs are in "strawberry", it gets it wrong: "Oh, I'm so sorry. Well, let me count it again for you." It's pretending to be a human being, and it's not. We should be sensitive to people who are being disrupted, and acknowledge what is lost when a job gets replaced by an LLM. We should make sure our marketing highlights the right use cases, the good ones, the ones that make the world a better place, and that we aren't overstating the benefits of this technology. And we should never, ever try to sell it as a replacement for human creativity and connection. Of course, I have a vested interest in this: as I said, I have a number of websites with thousands of pages of real, genuinely useful content written out of my own brain. Zero AI. I like coming up with things. And I do think there's room for computers in making art; I don't want to say there isn't.

I curated the first Art and Tech day at Linux Conference Australia here in Sydney in 2018, where we had a number of people using computers to make art in amazing ways. And I've spent a lot of my career at companies going hell for leather down the AI path, like Amazon and Canva. But my argument to all of you watching this is that we as leaders have a responsibility not to be uncritical enthusiasts of this technology. We need to make sure our customers, our colleagues, and the general public understand the problems and the limitations. We need to ensure the products we build off the back of it are sustainable and have real guardrails. And we need to stop suggesting it be used for anything and everything. We definitely shouldn't be teaching our kids that the best way to express themselves is by going to Gemini and typing in a prompt. Thank you.

Vivek: That was awesome. Thank you so much.

Kris: Went a little over. Sorry about that.

Vivek: No, no, it's all right. That was fantastic. Thank you so much, Kris. I certainly learned a lot, and you can certainly tell that ChatGPT did not write that talk.

Contributor
Christine Jensen
GTM Lead