I recently signed up for MasterClass, as there were more than just a few people giving courses that I'd love to hear from. I started with Steve Martin's course on comedy. I don't think I learned anything specific that I'd call out, but I did get a solid overview of his thought process and I found it very enlightening. I then took Shonda Rhimes Teaches Writing for Television, and found it even more useful, as I'm working on a television screenplay (don't worry, it's for fun. It's going to suck. But I want to do it).
When I was in high school, in the 80s, I was part of a group that hacked into corporate voicemail systems so that we young hackers could communicate. Voicemail was pretty obscure then. You could find me, around lunchtime, at the payphone on campus, picking up and leaving messages. (Note for the young: look up "payphone" if you need to.)
UPDATE, 26 September, 2019 - The FTC is suing Match.com for just the situation I describe in this blog post!
Match.com has a fake problem. That is, they have a problem with fake accounts and there is a clear reason why they have, for years, refused to do a single thing about it.
I don't know about you, but while I read about record heat everywhere, I find myself in the San Francisco Bay Area, where the temperature has peaked at 70F and hasn't really gotten past that all "Summer" so far.
"Summer" in air quotes.
Sure seems to me I should relocate and be working from a beach somewhere.
Just a quick note on the current AI kerfuffle - the genie is out of the bottle. You can attempt to regulate AI all you want, but those who wish to use it unethically will do so. When you can stand up your own stack in your machine room, you can do what you want.
What's needed right now is immediate pushback on the USE of AI. In the context of entertainment, for example, writers should demand in their contracts that AI may not be used by studios without consent. Actors should be contractually guaranteed the rights to their likeness and voice, and that AI cannot use either without proper compensation. And there should be penalties for misusing AI (though what counts as "misuse" is a huge debate).
Treat it as a tool you can't control, but focus on what is done with it. Lock that down now or it, too, will get away.
My official position - tools like ChatGPT fall into three buckets for me:
1. Factual generation is a complete fail. I ask for a list of "five things that happened on this day in history" and it just makes stuff up, but insists that it's factual. Nope.
2. Assistance with code and design. Moderately useful. It eliminates a lot of grunt work and is good at templates that I can then customize. Things that I would usually look up in a reference (and I know where to look) are now at my fingertips more easily. Properly used, even buggy code is easy to polish. But don't ask it for architecture. That's not ready for prime time yet. This is, more or less, at the function level. It's good at things like "give me a function implementing the best sort for this kind of data."
3. Creative assistance. This simply rocks. Given a good prompt, I've gotten multi-page creative articles that simply blew me away. Laugh-out-loud funny when I want humor. It actually feels like it understands comedy. It gives me writing prompts that my human friends don't come up with. This is where the technology really shines for me.
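To make bucket 2 concrete, here's a hypothetical example of the function-level grunt work I'm describing: the kind of template a prompt like "give me a function implementing the best sort for this kind of data" might produce. The names and data here are purely illustrative, not from any actual ChatGPT transcript.

```python
def sort_records(records, key, descending=False):
    """Return a list of dicts sorted by the given key.

    Relies on Python's built-in Timsort, which is a sensible
    default for the mostly-ordered data you see in practice.
    """
    return sorted(records, key=lambda r: r[key], reverse=descending)


# Illustrative usage with made-up data:
people = [
    {"name": "Ada", "age": 36},
    {"name": "Grace", "age": 85},
    {"name": "Alan", "age": 41},
]
print(sort_records(people, "age"))
```

Nothing here is hard, and that's exactly the point: it's boilerplate I'd otherwise look up or type out by hand, and it's trivial to review and polish before it goes anywhere near real code.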