A series of neural-network-generated images by DALL-E from OpenAI
“A business analyst gets help from a software developer”
I’ll tell you more about the images above at the end of this article. But first: you might have read my earlier post (link) in which I described an investigation we are conducting to better understand Citizen Developers. That project continues, and there’s an offshoot of it where, depending on your role, I want to ask for your help.
Is your company actively working to build a Citizen Developer ecosystem today? Are you one of the people working to help Citizen Developers succeed? If so, we’re interested in talking to you about your experiences. We’re not looking for proprietary information and, as I said in the earlier article, we’re not looking to sell you anything. We’d just like to understand how you support the success of Citizen Developers today. That might be technical help, data governance, security, or compliance; whatever way you contribute would be interesting to us. We’d like to understand what’s working in the real world and where challenges exist.
Thanks, as always.
So. About those images. I recently signed up for OpenAI’s DALL-E beta program. DALL-E is a neural network trained to generate images from a text caption. In short, you supply a textual description and DALL-E generates possible images. The results are wild. In the example above, my input literally was, “A sixteenth century business analyst gets help from a software developer”. The rest was up to DALL-E to fill in.
DALL-E might be simply a curiosity today, but it serves as an example of how large neural networks can be trained to do complex tasks that might not be obvious or accessible to a casual user. If you want to experiment with DALL-E yourself, you can get on the beta waitlist here. Enjoy.
In the meantime, if you think you fit the profile I described, we’d appreciate your perspective. Thanks.