Isaac Record

Thoughts about AI

Don't Pretend Social Problems are Technical Ones

It is always tempting to translate a problem from a domain where it seems intractable into a domain where it seems manageable. I think back to the first time I learned about the Laplace transform, a seemingly magical mathematical move that lets you turn a complicated calculus problem into a manageable algebra one. It's a key move in engineering. But sometimes the translation deforms the original problem beyond recognition. Technochauvinism is the idea that the most advanced technology is inherently the best (or even the only) response available. That just isn't so! In some of my other work, I've encountered the Social Change Wheel, just one alternative model for how to respond to certain kinds of problems. There is much more to say, but perhaps this is enough to nudge us out of complacency for the moment. I urge you to sit with uncertainty for a time.

Voices to Amplify

Further reading

AI Theses to Nail to a Door

  1. Use of current genAI is always unethical
  2. Use of genAI depends on, supports, and creates a system of exploitation
  3. We should work to protect our community from these corporate entities
  4. AI is a medium, and that medium says you don’t have to think or work at all. It isn’t cool like classic lit or hot like cable TV. Its fundamental message is that the answer is already out there, just waiting for the right prompt.
  5. AI is a tool. It is not good or bad. Nor is it neutral. But it’s really close to bad in terms of who controls it, who it exploits, and how it earns its keep.
  6. Why are we choosing the worst offenders? The message we are sending is that ethics can be bought. It’s disgusting. (See the background on the rhetoric these companies use to capture universities.)
  7. There are tools developed with creators and individuals in mind. These include experimental genAI a generation or two off the cutting edge, as well as tools like Adobe’s Firefly that (as of this writing) appear to respect copyright both in the training data and in the creations users make.
  8. No one wins a race to the bottom.
  9. Take the dollar amount MSU pays for all of its AI enabled products and distribute it as raises. Subscribe to something with real value.
  10. Don’t push onto individuals the responsibility to solve problems that are social, collective, or institutional.
  11. Aren’t you embarrassed to be left holding the bag by these companies that treat us with contempt?
  12. “Has any institution even acknowledged that the origin stories of AI tools belie the very academic values we’re trying to protect?” (Kevin Gannon)
  13. Ethics, a philosophical journal whose topic I bet you can guess, has an AI policy for authors, editors, and reviewers. It focuses on responsibility and confidentiality in broadly ruling out the use of AI.
  14. AI literacy is something we have to build — it isn’t something we are ready to teach. [source]
  15. Why not invest in people and resources?
  16. “None of the core skills associated with humanities education — critical reading, historical analysis, multilingualism, evidence-based argumentation — have become easier to acquire thanks to privately financed education technology. But that sector has smuggled their methods of value creation into our professional lives: automation, enclosure, unbundling, data harvesting, and behavioral modification.” [source]
  17. Four frictions
  18. This has all happened before. Winner, L. (2009). Information Technology and Educational Amnesia. Policy Futures in Education, 7(6), 587-591. https://doi.org/10.2304/pfie.2009.7.6.587
  19. AI destroys institutions. [source]
  20. “educational technology has become such an intricate part of teaching that tech choices are really a matter of academic freedom” - A Consensus Approach to Ed Tech Adoption, Chronicle of Higher Education
  21. “Many free AI detectors have especially high false positive rates because they make money selling software that ‘humanizes’ writing.” [source]
  22. “AI usage simply shifts the burden of completing tasks around the organization. The Journal cites a study from Workday which found that much of the time that employees reported saving by using AI tools was offset by extended reviews of AI-generated content. In many cases, of course, it’s executives who are passing down AI-generated work to subordinates, who must then review the work and correct it before it can be implemented.” [source]