Submitted by redditguyjustinp t3_zvvae8 in singularity
brain_overclocked t1_j1tptxo wrote
I suppose it wasn't going to be long before somebody would attempt to use AI in this manner, and most likely you're not the only one trying already. An interesting challenge to tackle for sure, there is certainly much to consider regarding an AI news agency not just in the technical side of things but especially in areas of bias, ethics, and perhaps a few other things we may not have considered yet!
Bias will certainly be an interesting challenge -- as some of the other commenters have already brought up, it's a hard problem, and it may never be entirely eliminated. But understanding bias in AI and training data, and how to identify and reduce it, is still a very active area of research, just as it is in journalism.
That said, with transparency of training data, public evaluation of technique, adherence to a journalistic code of ethics, and a framework for accountability, producing an AI model capable of delivering news in a trustworthy manner may well be an attainable goal.
If you're serious about the endeavor, then perhaps you may want to ruminate on some of these questions:
- Can you formally explain how you define and identify political bias, and how your AI model is able to minimize it?
- Can you do the same for loaded language? (There's a rough sketch after this list of what a naive first pass might look like, and why it wouldn't be enough.)
- How do you prep your model's training data?
- In journalism there is a bias termed 'false balance', where opposing viewpoints are presented as more evenly supported than the evidence warrants (e.g. climate change consensus vs. denialism). How does your model handle or present opposing viewpoints relative to the evidence? Is it susceptible to false balance?
- How do you define what a 'well-researched' story looks like? How would your model present that to the user?
- A key problem in science communication is balancing the details of a scientific concept or discovery against the comprehension of a general audience: present a topic too formally or in too much detail and you risk losing the audience's interest, their ability to follow along, or both; present it too informally and you risk miscommunicating the topic and perpetuating misunderstanding (how much context is too much? At what point does it confuse rather than clarify?). This balancing problem holds for just about every topic. How does your model present complicated ideas? How does it balance context?
- Why should people trust your AI news model?
- One way for readers to minimize bias is to read multiple articles or sources on the same topic, preferably ones with a strong history of factual reporting, and compare the common elements between them. To facilitate this there are sites like AllSides, which presents several articles on the same topic from a variety of biased and least-biased news agencies, or Media Bias/Fact Check, which maintains a list of news sites with a strong history of highly factual, least-biased reporting. Given that you intend to build your model as 'the single most reliable source of news', how do you plan to guarantee that reliability?
- How do you plan to financially support your model?
- Given that clickbait, infotainment, rage, and fear are easier to sell, how can people trust that you won't tweak your model for profitability?
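To make the loaded-language question a bit more concrete, here's a minimal, purely illustrative sketch of what a naive lexicon-based flagger might look like in Python. The terms, categories, and matching approach are assumptions made up for the example, not a real methodology; anything serious would need annotated data, a trained classifier, and human editorial review.

```python
# Purely illustrative: a naive lexicon-based flagger for loaded language.
# The terms and categories below are made-up examples; a real system would
# need annotated data, a trained classifier, and human editorial review.

LOADED_TERMS = {
    "slams": "conflict framing",
    "destroys": "conflict framing",
    "radical": "labeling",
    "regime": "delegitimizing language",
    "so-called": "scare framing",
}

def flag_loaded_language(text: str) -> list[tuple[str, str]]:
    """Return (term, category) pairs for every lexicon hit found in the text."""
    lowered = text.lower()
    return [(term, category) for term, category in LOADED_TERMS.items()
            if term in lowered]

if __name__ == "__main__":
    headline = "Senator slams so-called reform bill pushed by radical lawmakers"
    for term, category in flag_loaded_language(headline):
        print(f"'{term}' flagged as {category}")
```

Even this toy example shows the problem: 'regime' is loaded in one context and neutral in another, so a lexicon-only approach will both over- and under-flag. That's exactly why I'd want to see a formal explanation of how your model handles it.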
Having taken a peek at your FutureNewsAI, it seems it's still a ways off from your stated goal. I would hazard it's more for entertainment than anything serious yet.
But I wish you the best of luck with the endeavor.