Another view of peer review
Automating the lawyers
As I grow old, I jettison the unnecessary
Sleep is not just for humans, or even for other living creatures
We have written about peer review at ACSH; here is a slightly different take from Marginal Revolution.
“Weak-link problems are problems where the overall quality depends on how good the worst stuff is. You fix weak-link problems by making the weakest links stronger, or by eliminating them entirely.
Food safety is a weak link problem…
[But] some problems are strong-link problems: overall quality depends on how good the best stuff is, and the bad stuff barely matters….Venture capital is a strong-link problem
….Here’s the crazy thing: most people treat science like it’s a weak-link problem. Peer reviewing publications and grant proposals, for example, is a massive weak-link intervention.”
While most AI apocalyptic scenarios involve rogue machines and killer droids and drones, a much more disturbing AI application is just around the corner, if not already here: “AI attorneys” filing claims.
“The hype cycle for chatbots—software that can generate convincing strings of words from a simple prompt—is in full swing. Few industries are more panicked than lawyers, who have been investing in tools to generate and process legal documents for years. After all, you might joke, what are lawyers but primitive human chatbots, generating convincing strings of words from simple prompts?
For America’s state and local courts, this joke is about to get a lot less funny, fast. Debt collection agencies are already flooding courts and ambushing ordinary people with thousands of low-quality, small-dollar cases. Courts are woefully unprepared for a future where anyone with a chatbot can become a high-volume filer….”
From Wired, Robot Lawyers Are About to Flood the Courts
As I get older, I find that my circle of friends shrinks but at the same time deepens. Most of the “business friends” are long gone, as are many who were peripheral, weak links in my perceived network of friends and family. It turns out I am not alone, either as an individual or as a member of a species.
“We found that, similar to humans, aging female macaques focused their time and effort on family members and “friends” with whom they shared a particularly strong and stable bond.
While this narrowing of networks and focus on kith and kin does not necessarily result from macaques’ being aware they are nearing death – scientists aren’t sure if nonhuman animals have an awareness of their own mortality – it does suggest that there may be a shared evolutionary reason for social selectivity in humans and other primates.”
Why might our networks shrink? From The Conversation, Macaque monkeys shrink their social networks as they age – new research suggests evolutionary roots of a pattern seen in elderly people, too
Lately, I have noticed that when I learn something new, something old disappears from my memory, or at least becomes harder to retrieve. I don’t want to learn too much that’s new; otherwise, I might forget my address. As it turns out, computers, the current metaphor for our view of the mind, can have the same problem.
“Artificial neural networks are prone to a troublesome glitch known, evocatively, as catastrophic forgetting. These seemingly tireless networks can keep learning tasks day and night. But sometimes, once a new task is learned, any recollection of an old task vanishes. It’s as if you learned to play tennis decently well, but after being taught to play water polo, you suddenly had no recollection of how to swing a racket.
… Perhaps the spiking neural networks he was working with simply needed a rest.”
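Catastrophic forgetting is easy to reproduce in miniature. Here is a toy sketch of my own (a single weight fit by plain gradient descent, not the spiking networks the article discusses): a model that has mastered one task is retrained on a conflicting one, and its performance on the first task collapses.

```python
# Toy illustration of catastrophic forgetting: one weight, two conflicting
# tasks. This is an illustrative sketch, not the article's spiking networks.

def train(w, data, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def loss(w, data):
    """Mean squared error of y = w * x on the given data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, x) for x in (-2, -1, 1, 2)]    # task A: learn y = x
task_b = [(x, -x) for x in (-2, -1, 1, 2)]   # task B: learn y = -x

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A is learned
w = train(w, task_b)              # now learn the conflicting task B
loss_a_after = loss(w, task_a)    # large again: task A is "forgotten"
print(loss_a_before, loss_a_after)
```

Because the two tasks pull the same weight in opposite directions, learning the second necessarily overwrites the first, which is the essence of the glitch the quote describes.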
Do your computers really need to sleep? A good question, and one considered by Nautil.us in Even Machine Brains Need Sleep