I recently came across an interesting news story. In early March, New York State proposed a bill that would ban AI from answering questions in professional fields such as medicine, law, dentistry, nursing, psychology, and engineering. On the surface, it seems to be about protecting public safety, but a closer look at the underlying logic reveals some intriguing motives.

Doctors, lawyers, engineers: these high-paying professions understand perfectly well what AI can do. The bill looks less like it protects users and more like it protects their businesses. If ordinary people could quickly obtain basic legal knowledge, medical common sense, or even preliminary professional advice through AI, the knowledge monopoly of these professions would be broken, and their revenue models would inevitably take a hit.

This regulatory move by New York State essentially shields those industries. Using legal measures to restrict where AI can be applied looks like technology regulation, but in reality it preserves existing power structures. The pattern is familiar in regulation elsewhere: when a new technology threatens a traditional industry's business model, the industry lobbies policymakers for restrictive legislation.

Interestingly, this also reflects a broader issue facing AI development: technological progress keeps challenging established professional barriers. Whether this New York bill will actually become law remains uncertain, but from a regulatory perspective, attempts to block technological advancement through legal means rarely succeed in the long run.