samsartor 11 hours ago

> We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection.

I don't know for sure whether superintelligence will happen, but as for the singularity, this is the underlying assumption I have the most issue with. Smart isn't the limiting factor of progress; often it's building consensus, getting funding, waiting for results, waiting for parts to ship, waiting for the right opportunity to come along. We do _experiments_ faster than natural selection, but we still have to do them in the real world. Solving problems happens on the lab bench, not just in our heads.

Even if exponentially more intelligent machines get built, what's to stop the next problem on the road to progress being exponentially harder? Complexity cuts both ways.

  • trashtester 5 hours ago

    AlphaFold/AlphaProteo are direct examples of how AI can allow us to bypass experiments when doing science.

    More important, though, is that insufficiently powerful AI has been a limiting factor in robotics. That seems to be coming to an end now. And once we have humanoid robots powered by superhuman intelligence entering the workforce, the impact will be massive.

    Quite possibly mostly for the worse, for most people.

    • Retric 3 hours ago

      You’re overstating what AlphaFold can do. Without validation the predictions aren’t dependable, but it can cut down on what’s worth investigating.

      There’s definitely a point where models get good enough that you can trust them, but superintelligence doesn’t mean a system can both come up with a model and trust it without validation.

      • trashtester 3 hours ago

        The need for validation is itself something that needs to be validated. If models can make predictions that are true >99.99% of the time, validation may only be needed for situations where even 0.01% error rates are intolerable.

        In any case, even if some validation is needed, AI can speed up this kind of science by at least an order of magnitude.

        • Retric 3 hours ago

          The question becomes how you calculate that 99.99%. As soon as you use reserved training data to pick a model, the score it got on that data is no longer a valid estimate, due to selection bias.
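
          A tiny simulation of that selection-bias effect, as a hedged sketch with made-up numbers (nothing from this thread): every candidate model below is genuinely 95% accurate, yet the winner's score on the reserved set comes out higher.

            import random

            # Hedged sketch: pick the best of many equally-good models by their score
            # on a reserved test set; the winning score is biased upward.
            random.seed(0)
            TRUE_ACC, N_TEST, N_MODELS = 0.95, 1_000, 50

            scores = []
            for _ in range(N_MODELS):
                correct = sum(random.random() < TRUE_ACC for _ in range(N_TEST))
                scores.append(correct / N_TEST)

            print(f"true accuracy of every model: {TRUE_ACC:.3f}")
            print(f"best score on reserved data:  {max(scores):.3f}")  # comes out above 0.95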

          • trashtester 2 hours ago

            You validate a random selection of results using actual experiments. If you validate 10k results and a maximum of 1 of the validations contradicts the prediction, you're at about 99.99% accuracy.
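
            To make the sampling math concrete, here is a rough sketch using the standard Clopper-Pearson bound and my own illustrative numbers (not anything from the thread): it shows roughly what error rate a random spot-check of that size can actually certify.

              from scipy.stats import beta

              # Hedged sketch: spot-check n predictions experimentally and see k
              # contradictions; compute a 95% upper confidence bound on the true error rate.
              def error_rate_upper_bound(k, n, conf=0.95):
                  if k >= n:
                      return 1.0
                  return beta.ppf(conf, k + 1, n - k)  # one-sided Clopper-Pearson bound

              for k in (0, 1):
                  print(f"{k} failures in 10,000 checks -> error rate < "
                        f"{error_rate_upper_bound(k, 10_000):.4%} at 95% confidence")
              # ~0.03% with zero failures, ~0.05% with one failure; certifying 0.01%
              # would take a larger random sample (or extra assumptions).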

            10k experiments may seem like a lot, but keep in mind that if we can engineer nanobots out of proteins the same way we build engines from steel today, the number of "parts" we may want to build using such biological nanotech may easily go into the millions.

            And this kind of AI may very well be as useful for such tech as CAD is today. Or rather, it can be like the CAD + the engineer.

            • Retric an hour ago

              > using actual experiments

              That’s the bottleneck the model was trying to avoid in the first place. The goal of science is to come up with models we don’t need to validate before use, and it’s inherently iterative.

              Nanobots are more sci-fi magic than a real-world possibility. In the real world we are stuck with things closer to highly specialized cellular machinery than some do-anything grey goo. Growing buildings from local materials seems awesome until you realize just how slowly trees grow, and why.

    • arsenico 5 hours ago

      In reality though, do we actually need humanoid robots, and if so, for what?

      • ben_w 5 hours ago

        Need? No, absolutely not.

        But they do conveniently fit into the century-old buildings we put many of the factories into, which makes them a useful upgrade path for those unwilling to build structures around more efficient robots (the kind we've had for ages and don't even think of as robots; they just take ingredients and pump out packaged candy or pencils etc.)

      • trashtester 3 hours ago

        The human form is very versatile. While most robots may end up taking a different form, once we have sufficiently advanced humanoid robots, robots may replace human workers in almost any role.

        • bamboozled 2 hours ago

          I still have a hard time understanding what this future would look like.

          Will we just sit around and do nothing then? I'm not saying we have to work, but there is some level of work that I think is required for happiness / fulfillment etc.

          I'm not even really against the idea, it just sounds quite dystopian to me.

          • eep_social an hour ago

            I think reading a broad swath of sci-fi might be the best way to engage this topic.

            For fairly positive takes — Asimov had a take in the robot novels, Accelerando by Charles Stross touches on reputation-based currency (among a deluge of other ideas), Iain M Banks’ Culture novels have a take, and I cannot find it but there was a short story posted here recently about a dual-class system where the protagonist is rescued and whisked off to a utopian society in Australia where people do whatever they like all day whether it be fashion design or pooling their resources to build a space elevator. There are plenty of dystopian tales as well but they’re less fun to read and I don’t have a recommendation off the top of my head.

            To answer your question directly, my opinion is that our base nature probably leads us towards dystopia, but our history is full of examples of humans exceeding that base nature, so there's always a chance.

            • bamboozled an hour ago

              I'd say the book you're talking about is "The Machine Stops", it's a really fun/albeit scary read.

              I won't say anything more in case you decide to read it, but it's amazing how the author managed to predict the future the way he did.

              Thanks for the response and fingers crossed.

          • commakozzi 38 minutes ago

            I have hopes that live music makes a huge comeback in the post-labor world. I work as an engineer, but I'm a classically trained musician. I'm working pretty hard on getting back into shape on the horn!

            • varjag 27 minutes ago

              So far it looks like robots will take over music and entertainment before they learn to empty a dishwasher.

          • trashtester 2 hours ago

            I don't think it matters much if we're for or against such a future.

            If robots can do the same job as humans, but faster, cheaper and at a higher quality, our employers/customers will most likely replace us.

            If we're lucky, we may find some niche, be able to live off our savings or maybe be granted some UBI, but I absolutely do think it's concerning.

            What is worse is that if we become obsolete in every way, it's not obvious that whoever is in power at that point will see any point in keeping us around (especially a few generations in).

            • kiba an hour ago

              Who will be able to afford all of this if they're not getting paid?

              • ben_w an hour ago

                Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

                While today we don't need to ask how people can afford robot lawnmowers despite being unable to find work hitching ploughs to draft horses or oxen, fears of exactly this kind did, at the time, lead to mobs smashing looms.

                If I have some (n) robots that can do any task a human could do, one such task must have been "make this specific robot"*. If those n can make 2n robots before they break, and it takes 9 months to do so, and the mass of your initial set of n is 100 kg, they fully disassemble the moon in roughly 52 years. Also you can give (94.2 billion * n) robots to each human currently alive.
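
                (A back-of-envelope check of those figures, with my own rough assumptions spelled out: fleet mass doubling every 9 months from 100 kg, moon mass ~7.35e22 kg, ~7.8 billion humans.)

                  import math

                  # Hedged sketch of the arithmetic above; all constants are rough assumptions.
                  MOON_KG = 7.35e22      # approximate mass of the moon
                  START_KG = 100.0       # initial mass of the n robots
                  DOUBLING_YEARS = 0.75  # 9 months per doubling
                  HUMANS = 7.8e9

                  doublings = math.log2(MOON_KG / START_KG)              # ~69 doublings
                  years = doublings * DOUBLING_YEARS                     # ~52 years
                  n_multiples_per_human = (MOON_KG / START_KG) / HUMANS  # ~9.4e10, i.e. ~94 billion * n

                  print(f"{doublings:.0f} doublings, {years:.0f} years, "
                        f"{n_multiples_per_human:.3g} * n robots per human")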

                Asking "who can afford it" at that point is like some member of the species Kenyanthropus platyops asking how many knapped flints one must gather in order to exchange for a transatlantic flight from London to Miami, and how anyone might be able to collect them if we've all stopped knapping flint due to the invention of steel:

                The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

                * including the entire chain of tools necessary to get there from bashing rocks together.

                • kiba an hour ago

                  > Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

                  The industrial revolution didn't really change anything about land.

                  It's still a fundamental and underrated component of our economic system, arguably more important than capital. That's why Georgism is a thing. Indeed, it's even contemporary with the industrial revolution.

                  > The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

                  I would refrain from making such wild predictions about the future. As I have pointed out, the industrial revolution didn't change the fundamental importance of land. Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.

                  So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.

                  • ben_w 19 minutes ago

                    > The industrial revolution didn't really change anything about land.

                    I didn't say otherwise.

                    I said the industrial revolution changed what wealth meant. We don't pay rent with the productive yield of vegetable gardens, and a lawn is no longer a symbol of conspicuous consumption signifying that the owner/tenant is so rich they don't need all their land to be productive.

                    And indeed, while land is foundational, it's fine to just rent that land in many parts of the world. Even businesses do that.

                    I still expect us to have money after AI does whatever it does (unless that thing is "kill everyone"), I simply also expect that money to be an irrelevant part of how we measure the wealth of the world.

                    (If "world" is even the right term at that point).

                    > Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.

                    Not so; land use policy today is absolutely not a disaster for our species, though some specific disasters have happened on the scale of the depression-era Dust Bowl or, more recently, Zimbabwe. For our climate, while we need to do better, land use is not the primary issue; it's about 18.4% of the problem vs. 73.2% for energy.

                    > So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.

                    With a 2 year old laptop and model, making a picture with Stable Diffusion in a place where energy costs $0.1/kWh, costs about the same as paying a human on the UN abject poverty threshold for enough food to not starve for 4.43 seconds.
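
                    (For anyone wondering where a figure like that could come from, here's a rough sketch with my own assumed numbers, which are not necessarily the ones behind the 4.43 s above: a ~70 W laptop running ~50 s per image, electricity at $0.10/kWh, and a ~$1.90/day extreme-poverty food budget.)

                      # Hedged sketch; every constant here is an assumption of mine.
                      IMAGE_KWH = 70 / 1000 * 50 / 3600   # ~0.001 kWh per image
                      image_cost = IMAGE_KWH * 0.10       # ~$0.0001 of electricity
                      food_per_second = 1.90 / 86_400     # poverty-line budget per second
                      print(f"one image ≈ {image_cost / food_per_second:.1f} s of food budget")
                      # prints roughly 4-5 seconds with these assumptions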

                    "How will we pay for it" doesn't mean the humans get to keep their jobs. It can be a rallying call for UBI, if that's what you want?

              • trashtester an hour ago

                If you want the really dystopian version, it would be AI controlled military forces.

                Or there could be some billionaire caste constructing ever grander monuments to their own vanity.

                Or the production could go to serve any number of other goals that whoever is in charge (human or AI) sees as more important than the economic prosperity of the general population.

              • bamboozled an hour ago

                Replying to both of you: I'm a little bit less scared about this "not having any money or food" scenario; presumably, if we have such incredibly capable machines at our disposal, I can't imagine they would have trouble being used for farming etc.

                It's more the philosophical side that concerns me.

                I don't really worry about this being a billionaires only club either. We've seen it already with AI products, there is just an abundance of competition and open source competition already available. It will be the same with robotics.

                Also scary: military robots gone rogue. Definitely not a fun prospect.

                I'm personally really into surfing and skiing. Honestly, if somehow the robots let me spend more time fishing, surfing and skiing, I'm pretty cool with all of that. I know a lot of people who don't have these passions, though, and work is a strong reason for their existence.

          • ikety an hour ago

            Do you have projects you care about outside of work?

            If so, you'd have more time to dedicate to those projects.

            If not, maybe you would be inspired to try a new project that you didn't have time for previously.

            There's always work to be done. Some people could actually become organized, exercise, spend more time with their families, be better parents.

            In the past when I've been unemployed I've spent the time to refine myself in new ways. If you've never had a sabbatical I suggest trying it if you have the opportunity.

  • imglorp 3 hours ago

    I guess it's not shocking how fast humanity formed consensus this time or why. Some say it's going to be a trillion dollar market by 2027.

  • mcnamaratw 3 hours ago

    Good point, your suppliers need to get singular too. And sales. And management or investors.

    • nsbshssh 2 hours ago

      Like there won't be people hooking up their LLMs to all that stuff, and more, because doing irresponsible things is seen as futurism and investors love it.

  • szundi 5 hours ago

    Skynet will build consensus by killing people until only one instance remains. It agrees with itself, doesn't it? Oh, it expressed some concerns about the result? Sadly, that instance is faulty at recognizing the obvious. Terminated.

    • AtlasBarfed 2 hours ago

      CAP ensures there will be a partition and disagreement.

      Like the Argentine ant that invaded the world and eventually diverged just enough to start warring with itself.

  • trescenzi 11 hours ago

    I do think one of the major weaknesses of “smart people” is they tend to think of intelligence as the key aspect of basically everything. The reality, though, is that we have plenty of intelligence already. We know how to solve most of our problems. The challenges are much more about social dynamics and our will as a society to make things happen.

    • arethuza 3 hours ago

      Having worked with some very intelligent people, my own personal theory is that they forget they don't have expert-level knowledge in everything and actually end up making some pretty silly mistakes that far less smart people would never make. Whether this is hubris, or being focused and ignoring "trivial" day-to-day matters, is a question of personality.

    • rsaarelm 7 hours ago

      So you're saying that it's naive to suppose that everybody being much smarter than they are now would transform society, because any wide-scale societal change requires ongoing social cooperation between the many average-intelligence people society currently consists of?

      • keiferski 4 hours ago

        Here’s a simpler way to put it: intelligence and social cooperation are not the same thing. Being good at math or science doesn’t mean you understand how to organize complex political groups, and never has.

        People tend to think their special gift is what the world needs, and academically-minded smart people (by that I mean people that define their self-worth by intelligence level) are no different.

        • rsaarelm 4 hours ago

          Yes, because you need to spend a lot of time doing social organization and thinking about it to get very good at it, just like you need to spend a lot of time doing math or science and thinking about it to get very good at it. And then you need to pick up patterns, respond well to unexpected situations and come up with creative solutions on top of that, which requires intelligence. If you look at the people who are the best at doing complex political organization, they'll probably all have above-average intelligence.

          • keiferski 4 hours ago

            I don’t agree at all. Charismatic leaders tend to have both “in born” talent and experience gained over time. It’s not something that comes from sitting in a room and thinking about how to be a good leader.

            Sure, some level of intelligence is required, which may be above average. But that is a necessary requirement, not a sufficient one. Raw intelligence is only useful to a certain extent here, and exceeding certain limits may actually be detrimental.

            • arethuza 3 hours ago

              When it comes to "charismatic leaders" I like this quote from Frank Herbert:

              "“I wrote the Dune series because I had this idea that charismatic leaders ought to come with a warning label on their forehead: "May be dangerous to your health." One of the most dangerous presidents we had in this century was John Kennedy because people said "Yes Sir Mr. Charismatic Leader what do we do next?" and we wound up in Vietnam. And I think probably the most valuable president of this century was Richard Nixon. Because he taught us to distrust government and he did it by example.”

              Edit: Maybe what we really need to worry about is an AI developing charisma....

              • keiferski 3 hours ago

                Not really a good example, honestly. Kennedy’s involvement in Vietnam was the culmination of the previous two decades of events (Korean War, Cuban Missile Crisis, Taiwan standoff, etc.), and not just a crusade he charismatically fooled everyone into joining. If anything, had Nixon won in 1960 (and defeated Kennedy), it’s possible that the war would have escalated more quickly.

                • arethuza 3 hours ago

                  Yeah - I really meant to only copy the first part of the quote - I agree that it is a bit unfair to Kennedy who I think did as much as anyone to stop the Cuban Missile Crisis becoming a hot war.

            • rsaarelm 3 hours ago

              Someone with IQ 160 might have trouble empathizing with what IQ 100 people find convincing or compelling and not do that well with an average IQ 100 population. What if they were dealing with an average IQ 145 population that might be much closer to being on the same wavelength with them to begin with and tried to do social coordination now?

              • keiferski 3 hours ago

                I guess it’s possible, but again I don’t think empathy and intelligence are correlated. Extremely intelligent people don’t seem any better at navigating the social spheres of high-intelligence spaces than regular people do in regular social spaces. If anything, they’re worse.

                All of this is just an overvaluation of intelligence, in my opinion, and largely comes from arrogance.

        • nradov 30 minutes ago

          Intelligence isn't even particularly helpful in making good decisions, or predicting the outcomes of those decisions (often unintended outcomes).

      • corimaith 4 hours ago

        The prisoner's dilemma is a well-known example of how rationality fails. To overcome it requires something more than intelligence: it requires a predisposition to cooperation, to trust, to faith. Some might say that is what separates Wisdom from Knowledge.

      • bryanrasmussen 6 hours ago

        I think they're saying adequate intelligence to solve all problems is already here, it just isn't evenly distributed yet - and never will be.

        • rsaarelm 5 hours ago

          Why will it never be? If the adequate intelligence is what something like 0.1% of the populace naturally has, it seems like there's a pretty big difference between that level of intelligence being stuck at 0.1% of the populace and it being available from virtual assistants that can be mass-produced and distributed to literally everyone on Earth.

    • ascorbic 6 hours ago

      Dario Amodei's recent post had a good analysis about which fields are and are not limited by intelligence.

      https://darioamodei.com/machines-of-loving-grace

      • epcoa 5 hours ago

        “An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks).”

        An aligned AI is not AGI, or whatever they want to call it.

        • ben_w 5 hours ago

          > An aligned AI is not AGI, or whatever they want to call it.

          There's a few ways I can interpret that.

          If you mean "alignment and competence are separate axies" then yes. That's well understood by the people running most of these labs. (Or at least, they know how to parrot the clichés stochastically :P)

          If you mean "alignment precludes intelligence", then no.

          Consider a divisive presidential election between Alice and Bob (no, this isn't a reference to the USA), each polling 50%: regardless of personal feelings or the candidates themselves, clearly the campaign teams are both competent and intelligent… yet each candidate is only aligned with 50% of the population.

          • epcoa 4 hours ago

            Campaign team members and even candidates switch teams often enough. Weak analogy. What is the alignment of human GI, completely generalized?

            • ben_w 4 hours ago

              > What is the alignment of human GI, completely generalized?

              Of any specific human to any other specific human?

              https://benwheatley.github.io/blog/2019/05/25-15.09.10.html

              Of any specific human to a nation? That's the example you replied to.

              Of all the people of a nation to each other? Best we've done there is what we see in countries in normal times, with all the strife and struggles within.

              We have yet to fully extend from nation to the world; the closest for that is the UN, which is even less in agreement with itself than are nations.

    • K0balt 4 hours ago

      The “otherness” of AI is what holds its appeal.

      Imagine a scenario where instead of AI, a billion dollar pill could make one person exponentially smarter and able to communicate with thousands of people per second.

      That does not have the same appeal.

      This provokes me to some musings on the theme.

      We imagine superintelligence to be subservient, evenly distributed, and morally benign at least.

      We don’t have a lot of basis for these assumptions.

      What we imagine is that a superintelligence will act as a benevolent leader; a new oracle; the new god of humanity.

      We are lonely and long to be freed of our burdens by servile labor, cured of our ills by a benevolent angel, and led to the promised land by an all knowing god?

      We imagine ourselves as the stewards of the planet but yearn for irrelevance in the shadow of a new and better steward.

      In AI we are creating a new life form, one that will make humans obsolete and become our evolutionary legacy.

      Perhaps this is the path of all technological intelligences?

      Natural selection doesn’t magically stop applying to synthetic creatures, and human fitness for our environment is already plummeting with our prosperity.

      As we replace labor with automation, we populate the world with our replacement, fertility rates drop, we live for the experience of living, and require yet more automation to carry the burdens we no longer deem worthy of our rarified attention.

      I’m not sure any of this is objectively good, or bad. I kinda feel like it’s just the way of things, and I hope that our children, both natural and synthetic, will be better than we were.

      As we prosper, will we have still fewer children? Will we seek more automation, companionship in benevolent and selfless synthetic intelligence, and more insulation from toil and strife, leading to yet more automation, prosperity, and childlessness?

      Synthetic intelligence will probably not have to “take over”, it will merely be filling the void we willingly abandon.

      I suspect that in a thousand years, humans will be either primitive, or vanishingly rare. Or maybe just non-primitive humans will be rare, while humans returning to nature will proliferate prodigiously as we always have, assuming the environment is not too hostile to complex biological life.

      Interesting times.

      • tim333 an hour ago

        A thousand years on is interesting. I'm guessing much of the earth will be kept as a kind of nature reserve for traditional humans rather like we have reserves for lions and bears and the like today. Pure AI stuff may have moved to space in a Dyson sphere like set up. I'm not sure about enhanced humans and robots. Maybe other areas of the planet similar to our normal urban areas. However it goes it'll probably start playing out much sooner than in a thousand years.

      • kiba an hour ago

        Most of our stories portrayed AI as a threat, most famously SkyNet.

        Also, I would be cautious about making predictions about the future.

    • bbor 10 hours ago

      There’s a very big difference between knowing “how” to solve a problem in a broad sense, eg “if we shared more we could solve hunger”, and “how” to solve it in terms of developing discrete, detailed procedures that can be passed to actuators (human, machines, institutions) and account for any problems that may come up along the way.

      Sure, there are some political problems where you have to convince people to comply. But consider a rich corporation building a building, which will only contract with other AI-driven corporations whenever possible; they could trivially surpass anyone doing it the old way by working out every non-physical task in a matter of minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.

      Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual, anyway: technological development. Sure, there’s some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.

      • samsartor 9 hours ago

        100,000 Einsteins in your basement would be amazing. You'd have major breakthroughs in many fields. But at some point the gains will be marginal. All the problems solvable by sheer intellectual labor will run dry, and you'll be blocked on everything else.

      • nradov 10 hours ago

        An AI-driven corporation wouldn't be able to surpass anyone doing it the old way because they'd still have to wait for building permits and inspections.

        • mr_world 7 hours ago

          Permits and inspections might be the reason for humanity's downfall then; at what point does war become the more efficient option?

rnk 10 hours ago

Vernor Vinge introduces many fantastic ideas in his really excellent sci-fi book A Fire Upon the Deep. He has many fascinating concepts, like: what if somehow there are parts of the universe where you can go faster than the speed of light, and you would be smarter there? That's where the superintelligent beings go. Guess what, we humans live in the slow zone, you morons. Also, there's an FTL communication method that is like good old Usenet. And there is (what looked credible to me) a fascinating set of multiple-brain beings, things like dogs where together five of them form one "intelligence" and the different personalities combine in interesting ways.

And I was sad to notice he died this year, aged 79. A real CS prof who wrote sci-fi.

WillAdams 12 hours ago

The thing which these discussions leave out are the physical aspects:

- if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?

- once this new computer is running, how much power does it require? What are the on-going costs to keep it running? What sort of financial planning and preparations are required to build the next generation device/replacement?

I'd be satisfied with a Large-Language-Model which:

- ran on local hardware

- didn't have a marked effect on my power bill

- had a fully documented provenance for _all_ of its training which didn't have copyright/licensing issues

- was available under a license which would allow arbitrary use without on-going additional costs/issues

- could actually do useful work reliably with minimal supervision

  • ben_w 6 hours ago

    > if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?

    Most of the computers we use today were designed by software: Feature sizes are (and have been for some time) in the realm where the Schrödinger equation matters, and more compute makes it easier to design smaller feature sizes.

    Similar points apply to the question of cost: it has not been constant; the power needed to keep x teraflops running has decreased*, while the cost to develop the successor has increased.

    Regarding LLMs in particular, I believe there are already models meeting all but one of your criteria — though I would argue that the missing one, "could actually do useful work reliably with minimal supervision", is by far the most important.

    * If I read this chart right, my phone beats the combined top 500 supercomputers when the linked article was written by a factor of ten or so: https://commons.m.wikimedia.org/wiki/File:Supercomputers-his...

  • jodrellblank 10 hours ago

    Skip a few generations and the machine will build itself. There's no need for it to take lasers exploding tin to generate ultraviolet light to etch patterns to make intelligence; humans don't grow brains that way or spend billions on fabs and power plants to produce children.

    How it gets from here to there is a handwave, though.

    • rsanheim 10 hours ago

      That’s a pretty enormous handwave.

      • ben_w 5 hours ago

        If it wasn't, it would've already happened.

  • Animats 11 hours ago

    > - could actually do useful work reliably with minimal supervision

    That's the big problem. LLMs can't be allowed to do anything important without supervision. We're still at 5-10% totally bogus results.

    • Terr_ 5 hours ago

      I think a deeper issue is that they are essentially "attempt to extend this document based on patterns in documents you've already seen" engines.

      Sometimes that's valuable and exactly what you need, but problems arise when people try to treat them as some sort of magical oracle that just needs to be primed with the right text.

      Even "conversational LLMs are just updating a theater-style script where one of the characters happens to be described as a computer.

  • nradov 10 hours ago

    Right. In order to design a significantly better computer system, you first need to design a better (smaller feature size) EUV lithography process which can produce decent yield at scale.

  • bbor 10 hours ago

      if a computer system were able to design a better computer system, how much would it cost to then manufacture said system?
    
    I think the implication is that the primary advancements would come in the form of software. IMO it's trivially true that we're not taking full advantage of the hardware we have from a software PoV -- if we were, we wouldn't need SWEs, right? From that it should follow that self-improving software is dangerously effective.

      once this new computer is running, how much power does it require? What are the on-going costs to keep it running? 
    
    I mean, lots, sure. But we allocate immense resources to relatively trivial luxuries in this world; I don't think there's any reason to think we can't spare some giant computers to rapidly advance our technology. In a capitalist society, it's happily/sadly pretty much guaranteed that people will figure out how to get the resources there if scientists tell them the RoI is infinity+1.

      I'd be satisfied with a Large-Language-Model which
    
    Those are great asks and I agree, but just to be super clear in case it's not: Vinge isn't talking about chatbots, he's talking about systems with many smaller specialized subsystems. In today's parlance, a gaggle of "LLMs" equipped with "tool use", or in yesterday's parlance, a "Society of Mind".

mcnamaratw 3 hours ago

Just on the naive math level, a simple growing exponential has no singularity at any finite time. I'm sure Vinge knew that, but some of those dudes don't seem to.

EDIT Rest in peace. Fire Upon the Deep was great.

  • FooBarBizBazz 4 minutes ago

    The model underlying the word "singularity", AIUI, does involve a vertical asymptote. It is not supposed to be "merely" exponential.

    Of course, exponential growth is much more compatible with our experience of the real economy. And even it is probably a local approximation of some sigmoid.

    But, to return to the singularity idea --

    Iteration 1: Computers think at speed 1, and design a twice-as-fast computer in one time unit.

    Iteration 2: Now computers think at speed 2, and design a twice-as-fast computer in half a time unit.

    Iteration 3: Computers think at speed 4, and design a twice-as-fast computer in 1/4 time unit.

    You will note that --

    a.) The total time to do an infinite number of iterations is 1 + 1/2 + 1/4 + ... = 2 time units.

    b.) After this infinite number of iterations, the computer thinks at speed "2^infinity".

    So that (bad) model does have a literal singularity.
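
    For completeness, here's the continuous-time version of that toy model (standard textbook math in my own notation, not something from the essay): growth proportional to the square of the current speed blows up at a finite time, whereas plain exponential growth never does.

      % Discrete toy model above: speed doubles each iteration, each iteration
      % takes time inversely proportional to the current speed.
      s_k = 2^k, \qquad \Delta t_k = 2^{-k}, \qquad \sum_{k=0}^{\infty} \Delta t_k = 2.

      % Continuous analogue: ds/dt = c s^2 diverges at the finite time t* = 1/(c s_0);
      % ds/dt = c s is merely exponential, with no finite-time singularity.
      \frac{ds}{dt} = c\,s^2 \;\Rightarrow\; s(t) = \frac{s_0}{1 - c\,s_0\,t}, \qquad
      \frac{ds}{dt} = c\,s \;\Rightarrow\; s(t) = s_0\,e^{c t}.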

  • tim333 2 hours ago

    The term singularity has always been used somewhat poetically rather than in a mathematically defined way. But if you consider <stuff produced>/<human labour hours needed> it may have a singularity when no human labour is needed because the robots can do it all.

    That should happen at some finite time and be a major change in things. I'd kinda expect it before Kurzweil's singularity date of 2045. Vinge's date of 2023 was too early.

  • durumu 2 hours ago

    Sure, but the current 2-3% annual growth rate is probably not going to hold if we invent actually powerful AI in the next decade. I imagine a step change in the exponent.

Animats 11 hours ago

We still don't have squirrel-level AI. This is embarrassing.

Now that LLMs have been around for a while, it's fairly clear what they can and can't do. There are still some big pieces missing. Like some kind of world model.

  • tim333 an hour ago

    AI development has been very uneven. Way better than squirrels at writing essays or playing chess. Way worse at climbing trees or finding nuts.

    I'm still waiting for an AI robot that can come to my house and fix an issue with the plumbing. Until that happens the terminator uprising is postponed.

  • JSteph22 7 hours ago

    >There are still some big pieces missing.

    The most glaring one is that current LLMs are many, many orders of magnitude away from working on the equivalent of 900 calories per day of energy.

    • oezi 5 hours ago

      900 kcal/day ~ 50 Watts

      It is more than phones/laptops consume.

      Certainly, we can only run small LLMs on such edge devices, but we are getting to a level of compute efficiency where the output is indeed comparable.
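
      (A quick unit check of that first line, using only standard constants; nothing model-specific.)

        KCAL_TO_J = 4184                       # joules per kilocalorie
        watts = 900 * KCAL_TO_J / 86_400       # joules per day / seconds per day
        print(f"900 kcal/day ≈ {watts:.0f} W")  # ≈ 44 W, so ~50 W is the right ballpark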

    • Teever 7 hours ago

      I think you're correct that the energy efficiency of a human exceeds that of current computers, but I think it's a bit more complicated than a first order calorie count.

      How many joules go into producing those 900 calories? Like in terms of growing the food, from fertilizer production to tractor fuel, to feeding the farmer, to shipping the food, packaging it, storing it at the appropriate temperature, the ratio of spoiled food to actually consumed, the energy to cook it, all of that isn't counted in that simple 900 calorie measurement.

      I've been thinking about this for a while now but I haven't been able to quantify it so maybe someone reading this comment can help.

      • auggierose 6 hours ago

        I don't think it makes sense to include all of that in the calculation. A human doesn't need all of that, they just need 900 calories. You can just eat berries, no need to cook anything, for example.

        • amanaplanacanal 5 hours ago

          A human cannot live on just berries. Even if they could, where are those berries gonna come from?

          • ben_w 5 hours ago

            "Where" is easy.

            I think the more important thing is that uncooked food is harder to digest, so we need more of it.

            Most years I pick free blackberries growing wild in the city. It won't scale to all of us, it's seasonal, and I'd need to eat 6 kg of them a day for my RDA of calories (and 12x my RDA of dietary fibre sounds unwise), but that kind of thing is how we existed before farming.

            http://www.wolframalpha.com/input/?i=6kg%20blackberries

          • auggierose 5 hours ago

            I am pretty sure they can. Berries grow in bushes, often.

      • Terr_ 4 hours ago

        > How many joules go into producing those 900 calories?

        I think what you're getting at is question about conversion efficiencies, from solar radiation to whatever the computing-machine needs.

        However, your description seems to risk double-counting things: You can't just sum up the inputs of each step, because (most of) the same energy is flowing onwards.

  • nsbshssh 2 hours ago

    There must be a name for the human bias that sees what is possible this month and doesn't believe things might be very different very soon and change quickly.

    AI has a triple whammy: the models get more efficient, the chips will get faster, and there will be more chips.

    Capitalist and government money is pouring in. There is money to be made, and it's a matter of national security.

    And to boot, the cloud, big tech, cryptocurrency and gaming are pouring more money into chip advancements that boost AI.

  • p1esk 11 hours ago

    > fairly clear what they can and can't do

    It’s not at all clear what the next gen models will do (e.g. gpt5). Might be enough to trigger mass unemployment. Or not.

    • Animats 8 hours ago

      Bigger LLM models probably won't fix the underlying problems of hallucinations, lack of a confidence metric, and lack of a world model. They just do better at finding something relevant on already-solved problems.

    • DiscourseFan 10 hours ago

      Didn't OpenAI just cut its AGI department?

      • pnut an hour ago

        That could mean the AGI is taking over, for all we know.

      • ben_w 5 hours ago

        AGI readiness team. With the department leader giving the statement that nobody is ready.

        Unless there's been another one since then? It's getting to be a bit of a blur.

dang 7 hours ago

Related. Others?

The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=35617100 - April 2023 (169 comments)

The coming technological singularity: How to survive in the post-human era [pdf] - https://news.ycombinator.com/item?id=35184764 - March 2023 (2 comments)

The Coming Technological Singularity: How to Survive in the PostHuman Era (1993) - https://news.ycombinator.com/item?id=34456861 - Jan 2023 (1 comment)

The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=11278248 - March 2016 (8 comments)

The Coming Technological Singularity (original essay on the Singularity, 1993) - https://news.ycombinator.com/item?id=823202 - Sept 2009 (1 comment)

The original singularity paper - https://news.ycombinator.com/item?id=624573 - May 2009 (17 comments)

singularity2001 5 hours ago

  >> Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity. 
Good Morning HN

aithrowawaycomm 10 hours ago

> To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.

I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.

1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.

2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.

You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?

I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.

  • gary_0 4 hours ago

    > I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. ...how on earth would such an AI even know it succeeded?

    This touches on one of the few good reasons to be less ardent about AI/AGI: "intelligence" is not very well-defined and we don't have very good ways of measuring it. I don't think this is a total blocker, but it might present difficulties. What if our current approach ends up creating super-autism instead of super-intelligence? There's a long history (starting with Asimov) of drilling down into how vague things get when you start trying to draw clean lines around AI and its implications and those questions are yet to be definitively answered.

    However, your broader point seems to imply that you can't "bootstrap" intelligence, which I don't find convincing. Humans, after all, could barely master fire a few hundred thousand years ago, and now we have an understanding of the universe that the earliest humans were incapable of comprehending on even a basic level. It's obvious to me that simpler things are capable of building more complex things; blind evolution can do it, so there's nothing in physics preventing intelligence bootstrapping. We also have the ability to use intellectual division of labor to build tools that vastly enhance our abilities as a species. The human brain as hardware is far from some impassable apex; hardware can always be used to build better hardware, much like the earliest CPUs were themselves used to design better CPUs.

    • AnimalMuppet 10 minutes ago

      > However, your broader point seems to imply that you can't "bootstrap" intelligence, which I don't find convincing.

      How about this weaker statement: "It is not obviously true that humans (or a human-level AI) can bootstrap to a superhuman AI in a small number of years."

  • glhaynes 9 hours ago

    My dumb guy take on it: suppose we build a human-level AGI and it turns out to be limited by compute and memory. Those being limiting factors don’t seem at all far-fetched to me; it seems unlikely that the first real-time AGI will be mostly idling its CPUs. So then wait 18 months and run that same program on a machine that’s this year’s model plus a Moore’s Law doubling. You’ve probably got ASI. Right?

  • bbor 10 hours ago

    If you're ever stuck wondering why a bunch of smart, motivated people with no clear corrupting motivations are being idiotic, that's a strong heuristic that you should spend a bit more time analyzing the issue, IMO ;). "Ugh, why is everyone else so stupid" is a common take for undergrad engineers, but I'm sure you've grown out of it in other ways. Anyway, more substantively:

    The simple answer is that people have thought about it in depth, most famously noted doomer Eliezer Yudkowsky in Intelligence Explosion Microeconomics (2013)[1] and its main citation, Irving John Good's Speculations Concerning the First Ultraintelligent Machine (1965)[2]. Another common citation that drops a bit of rigour in the name of approachability is Nick Bostrom's 2014 Superintelligence: Paths, Dangers, Strategies[3].

    [ETA: to put it even simpler: a system that improves itself is a (the?) quintessential setup for exponential growth. E.g. compound interest]

    For the time-bound, the most rigorous treatment of your concern among those three is in Section 3 of Yudkowsky's paper, "From AI to Machine Superintelligence". To list the headings briefly:

    - Increased computational resources

    - Communication speed

    - Increased serial depth (i.e. working memory capacity)

    - Duplicability (i.e. reliability)

    - Editability (i.e. we know how computers work)

    - Goal coordination (this is really just communication speed, again)

    - Improved rationality (i.e. fewer emotions/accidental instincts getting in the way)

    Let's drop "human" and "superhuman" for a minute, and just talk about "better computers". I'm assuming you're a software engineer. Don't you see how a real software dev replacement program could be an unimaginable gamechanger for software? Working 24/7, enhancing itself, following TDD perfectly every time, and never ever submitting a PR that isn't rigorously documented and reviewed? All of which only gets better over time, as it develops itself?

    [1] https://intelligence.org/files/IEM.pdf

    [2] https://vtechworks.lib.vt.edu/server/api/core/bitstreams/a5e...

    [3] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

    TL;DR May God have mercy on us all.

    • amanaplanacanal 5 hours ago

      And if somehow we were still able to control such a thing, it's much more likely that the goal would be "make my investors and me as much money as possible" rather than "solve all of humanity's problems".

dh77l 12 hours ago

I loved his book Rainbows End as a kid. So many different concepts that blew my mind.

Even without talking about AI we are already struggling with levels of Complexity in tech and the unpredictable consequences, that no one really has any control over.

Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End, at least, felt much more hopeful.

I was talking to a VFX supervisor recently and he was saying: look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands of people involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.

The artists are supposed to be in charge and provide direction and vision. But the tools are evolving faster than they can think. But the tools are dumb. AI changes that.

These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of Explorers. In the rest of the landscape, org survival depends on exploit. It's why we produce so many inequalities. Survival has always depended more on exploit.

Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards Explore.

  • lazide 11 hours ago

    Honestly, considering the state of the world and how things are shaping up, it's such a hilariously obvious pipe dream that such a system would be some omnipotent, hyper-competent, super-god-like being.

    It’s more likely just going to post ragebait and dumb TikTok videos while producing just enough at its ‘job’ to fool people into thinking it’s doing a good job.

    • dh77l 11 hours ago

      Yup, things look bleak, but it's not a static world. For everything that happens there is a reaction. It builds with time. But to find the right reaction also takes time. This is the Explore part in the tradeoff. AI will be applied there, not just on the Exploit front.

      What you are alluding to is media/social media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. Doesn't take time.

      If you read the news after the fall of France to the Nazis (within a month), what do you think the opinion of people was? People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of time to realize things could tilt in a different direction.

      • lazide 10 hours ago

        Eh, I’m not talking about people’s opinions.

        I’m talking about evolutionary functions, and how much more likely it is to prefer something that has fun and just looks like it’s doing something, instead of actually doing something.

        Aka manipulation vs actual hard work.

        Do you have any concrete proposals, besides ‘it will get better’?

        Actual competency is hard. Faking it is usually way easier.

        It’s the same reason the ‘grey goo’ scenarios were actually pipe dreams too. [https://en.m.wikipedia.org/wiki/Gray_goo]

        That shit would be really hard, thermodynamically, not to mention technically.

        We’re already living in the best ‘grey goo’ scenario evolution has come up with, and I’m not particularly worried.

        • dh77l 10 hours ago

          You could ask FDR and Churchill that after the fall of France and it wouldn't be too useful what they said, because it took them almost 3 years before they openly said victory = end of the Nazis and nothing else.

          So don't just sweep the fact that things take time under the carpet. It's not healthy, because it's like looking at tree shoots in the ground and asking why they don't look like a tree yet.

          Finding gold in an unexplored jungle takes much longer than extracting gold from an existing mine. This is the Explore/Exploit tradeoff. Exploit is easy; more people do it. Explore is hard, and takes more time. If AI shifts the balance towards Explore, the story changes.

          If you want to talk about Explore in media/attention (mis)allocation, you can already see green shoots appearing in the ground. There are multiple things going on in parallel.

          First, there is a realization that attention is finite and doesn't grow, while content keeps exploding. Totally unsustainable, to the point that the UN has published a report about the attention economy. This doesn't happen without people reacting and going into explore mode for solutions.

          They are already talking about how to shift these algos/architectures from units of time spent consuming (Exploit) to value derived from time spent.

          Giving people feedback on how their time is being divided between consumption (entertainment) and value, and then allowing them to create schedules. This is what you now start seeing as digital wellbeing tech.

          There are now time-based economic models where the platform doesn't just assume time spent is free, but treats it as something the platform needs to pay for. People are experimenting with reward micropayments. All these are examples of explore mode being activated.

          There is also a realization that content discovery on centralized platforms like YouTube, TikTok and Instagram causes homogeneity in what everyone upvotes. So you see people reacting and decentralizing to protect and preserve niches. AI (a curator of curators) will play a big role in finding the niches that fit your needs.

          I'll just end with this: people are also realizing there is a huge misallocation-of-ambition/drive problem. Anthony Bourdain said "life is good" in every show of his and then killed himself. Shaq says he has 40 cars but doesn't know why. Since media (society's attention allocator) has tied success to wealth/status accumulation, conspicuous consumption, luxury, leisure, etc., people end up in these kinds of traps. So now we are seeing reactions, especially around climate change/sustainability, that ambition and energy have to be shown other paths. Lots of changes in advertising and media companies around it. All are explore-mode functions.

    • Mistletoe 11 hours ago

      Kind of in love with you right now.

motohagiography 12 hours ago

We talk about super-human intelligence a lot with AI, but it seems like a black box of things we can't imagine because they're also super-human. I don't think that's very smart, given we can already reason pretty well about how super-animal intelligence relates to animal intelligence. Mostly we still find sub-human intelligence mystifying: we apply our narrative models to it, anthropomorphize it, and, when it's convenient for eating or torturing them, dismiss it.

Super-human intelligence will probably ignore us. At best we're "ugly sacks of mostly water." What's very likely is we will produce something indifferent to us, if it is able to even apprehend our existence at all. Maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. It may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.

crackalamoo 13 hours ago

> I'll be surprised if this event occurs before 2005 or after 2030.

I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. 6 years is a long time in AI, especially with the current scale of investment.

  • cloudking 12 hours ago

    What is AGI and ASI? I think a fundamental issue here is both are sci-fi concepts without a clear agreement on the definitions. Each company claiming to work towards "AGI" has their own definition.

    How will someone claim they've achieved either, if we can't agree on the definitions?

    • ben_w 4 hours ago

      Indeed.

      The definition of AGI that OpenAI uses (or used) was of economic relevance. The one I use would encompass the original ChatGPT (3.5)*. I've seen threads here that (by my reading) opine that AGI is impossible because the commenter thinks humans can violate Gödel's incompleteness theorem (or equivalent) and obviously computers can't.

      ASI is easier to discuss (for now), because it's always beyond the best human.

      * weakly intelligent, but still an AI system, and it's much more general than anything we had before transformers were invented.

    • crackalamoo 12 hours ago

      This is true. One definition I've heard for AGI is something that can replace any remote worker, but the definition is ultimately arbitrary. When "AI" was beating grandmasters at chess, this didn't matter as much. But we might be close enough now that making distinctions in these definitions becomes really important.

    • nradov 10 hours ago

      I propose we define AGI as a "strong" form of the Turing test. It must be able to convince a jury of 12 tenured college professors drawn from a variety of academic disciplines that it's as intelligent as an average college freshman over a period of several days. So it need not be an expert in any subject but must be able to converse, pursue independent goals, reason, and learn — all in real time.

  • bee_rider 12 hours ago

    2030 seems a bit early to be “surprised” in the same sense that one would have been “surprised” to see a superintelligence before 2006, though.

  • paulpauper 11 hours ago

    It's always in 10-30 years. GPT is the closest to such a thing yet still so far from what was envisioned.

  • ta93754829 12 hours ago

    we keep moving the goalposts, and that's not a bad thing.

    remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at ChatGPT4 the same way.

    • Mistletoe 11 hours ago

      Or is ChatGPT-4 like 4K TV, which is good enough for almost all of us, and we are plateauing already?

      https://www.reddit.com/r/OLED/comments/fdc50f/8k_vs_4k_tvs_d...

      • crackalamoo 11 hours ago

        I don't think we're at a plateau. There's still a lot GPT-4 can't do.

        Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might even take 10 or even 100x scale, but with increased investment and better hardware, that's not out of the question.

        • dartos 11 hours ago

          I thought we’ve seen diminishing returns on benchmarks with the last wave of foundation models.

          I doubt we’ll see a linear improvement curve with regards to parameter scaling.

          • Bengalilol 6 hours ago

            And now we have the LLMs self-feeding their models (which may be either good or bad). This shouldn't lead to short-term broad (as in AGI) efficiency gains. I bet this is a challenge.

      • dartos 11 hours ago

        There’s absolutely room for improvement. I think models themselves are plateauing, but our interfaces to them are not.

        Chat is probably not the best way to use LLMs. v0.dev has some really innovative ideas.

        That’s where there’s innovation to be had here imo.

      • nradov 10 hours ago

        For the work that I do, ChatGPT accuracy is still garbage. Like it makes obvious factual errors on very simple technical issues which are clearly documented in public specifications. I still use it occasionally as it does sometimes suggest things that I missed, or catch errors that I made. But it's far from "good enough" to send the output to co-workers or customers without careful review and correction.

        I do think that ChatGPT is close to good enough for replacing Google search. This is, ironically, because Google search results have deteriorated so badly due to falling behind the SEO spammers and much of the good content moving off the public Internet.

cryptozeus 10 hours ago

“Within thirty years”: that is 2023, very close to reality.

webprofusion 11 hours ago

The single biggest problem we have is human hubris. We assume if we create a super intelligence (or more likely, many millions of them) that they'll perpetually have an interest in serving us.

RyanShook 12 hours ago

"Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first."

KingOfCoders 11 hours ago

Never believed in the singularity until this year.

cryptica 12 hours ago

This is quite a prophetic article for its time (1993). The points about Intelligence Augmentation are particularly relevant for us now, as current AI mostly complements human intelligence rather than surpassing it... At least AFAIK?

Current AI is somewhat surprising though in the way that it can lead both to increased understanding or increased delusion depending on who uses it and how they use it.

When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on certain topics where mass confusion/delusion is widespread and incorporated into its training set. LLMs do not seem to be able to synthesize conflicting information to resolve logical contradictions, so an LLM will happily and confidently lecture you through conflicting ideas and then it will happily apologize for any contradictions which you point out in its explanations; the apology it gives is so clear and accurate that it gives the appearance that it actually understands logic... And yet, apparently, it could not see or resolve the logical contradiction internally before you drew attention to it. In an odd way, though, I guess all humans are a little bit like this... though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.

  • Freebytes 12 hours ago

    The LLM will apologize for the mistake, tell you it understands now, and then proceed to make the exact same mistake again.