Eliezer: A typical neuron firing as fast as possible can do maybe a thousand spikes per second (a few rare neuron types, used by e.g. bats to echolocate, can spike a few times faster), and the vast majority of neurons are not firing that fast at any given time.

The usual and proverbial rule in neuroscience - the sort of academically respectable belief I'd expect you to respect even more than I do - is called "the hundred-step rule": any task a human brain or mammalian brain can do on perceptual timescales must be doable with no more than about a hundred serial steps of computation - no more than a hundred things that get computed one after another.

Or even less if the computation is running off spiking frequencies instead of individual spikes. If you actually look at what the retina is doing, and how it's computing that, it doesn't look like it's doing one floating-point operation per activation spike per synapse. This is harder to visualize and get a grasp on than the parallel-serial difference, but that doesn't make it unimportant.
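The arithmetic behind the hundred-step rule is a short Fermi estimate. The sketch below uses illustrative order-of-magnitude assumptions (a few milliseconds per serial neural step, roughly half a second for a fast perceptual judgment), not measured values:

```python
# Fermi sketch of the hundred-step rule.
# Both inputs are illustrative order-of-magnitude assumptions.
neuron_step_s = 5e-3       # time for one serial neural "step", ~milliseconds
perceptual_task_s = 0.5    # time for a fast perceptual judgment

serial_step_budget = perceptual_task_s / neuron_step_s
print(serial_step_budget)  # → 100.0
```

Whatever the brain computes in that window has to fit inside a budget of roughly a hundred sequential operations, however massively parallel each operation is.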

Which brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion - that it is in principle possible to build an AGI much more computationally efficient than a human brain - namely that biology is simply not that efficient, and especially when it comes to huge complicated things that it has started doing relatively recently.

Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike. The result is that the brain's computation is something like half a million times less efficient than the thermodynamic limit for its temperature - so around two millionths as efficient as ATP synthase. And neurons are a hell of a lot older than the biological software for general intelligence! Humbali: Ah! But allow me to offer a consideration here that, I would wager, you've never thought of before yourself - namely - what if you're wrong?
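The thermodynamic comparison above can be made concrete. The sketch below computes the Landauer limit (the minimum energy to erase one bit) at body temperature from standard physical constants, then applies the essay's half-a-million factor to back out the implied energy per elementary brain operation; the factor itself is taken from the text, not derived here:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K (exact in the 2019 SI)
T = 310.0                # approximate human body temperature, K
landauer_limit_j = k_B * T * math.log(2)   # minimum J to erase one bit, ~3e-21

# The text's claim: the brain runs about half a million times above this limit.
implied_j_per_op = 5e5 * landauer_limit_j
print(landauer_limit_j, implied_j_per_op)
```

At roughly 1.5e-15 joules per elementary operation, that leaves an enormous gap between the brain and what physics permits at 310 K.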

Ah, not so confident now, are you? Eliezer: One observes, over one's cognitive life as a human, which sorts of what-ifs are useful to contemplate, and where it is wiser to spend one's limited resources planning against the alternative that one might be wrong; and I have oft observed that lots of people don't. They'll be like, "Well, what if UFOs are aliens, and the aliens are partially hiding from us but not perfectly hiding from us, because they'll seem higher-status if they make themselves observable but never directly interact with us?"

I am not sure how I can get people to reject these ideas for themselves, instead of them passively waiting for me to come around with a specific counterargument. My having to counterargue things specifically now seems like a road that never seems to end, and I am not as young as I once was, nor am I encouraged by how much progress I seem to be making.

I refute one wacky idea with a specific counterargument, and somebody else comes along and presents a new wacky idea on almost exactly the same theme. I know it's probably not going to work, if I try to say things like this, but I'll try to say them anyways. When you are going around saying 'what-if', there is a very great difference between your map of reality, and the territory of reality, which is extremely narrow and stable.

Drop your phone, gravity pulls the phone downward, it falls. What if there are aliens and they make the phone rise into the air instead, maybe because they'll be especially amused at violating the rule after you just tried to use it as an example of where you could be confident? Imagine the aliens watching you, imagine their amusement, contemplate how fragile human thinking is and how little you can ever be assured of anything and ought not to be too confident.

Then drop the phone and watch it fall. You've now learned something about how reality itself isn't made of what-ifs and reminding oneself to be humble; reality runs on rails stronger than your mind does. Contemplating this doesn't mean you know the rails, of course, which is why it's so much harder to predict the Future than the past. But if you see that your thoughts are still wildly flailing around what-ifs, it means that they've failed to gel, in some sense, they are not yet bound to reality, because reality has no binding receptors for what-iffery.

The correct thing to do is not to act on your what-ifs that you can't figure out how to refute, but to go on looking for a model which makes narrower predictions than that. If that search fails, forge a model which puts some more numerical distribution on your highly entropic uncertainty, instead of diverting into specific what-ifs. And in the latter case, understand that this probability distribution reflects your ignorance and subjective state of mind, rather than your knowledge of an objective frequency; so that somebody else is allowed to be less ignorant without you shouting "Too confident!"

Reality runs on rails as strong as math; sometimes other people will achieve, before you do, the feat of having their own thoughts run through more concentrated rivers of probability, in some domain. Now, when we are trying to concentrate our thoughts into deeper, narrower rivers that run closer to reality's rails, there is of course the legendary hazard of concentrating our thoughts into the wrong narrow channels that exclude reality. And the great legendary sign of this condition, of course, is the counterexample from Reality that falsifies our model!

But you should not in general criticize somebody for trying to concentrate their probability into narrower rivers than yours, for this is the appearance of the great general project of trying to get to grips with Reality, that runs on true rails that are narrower still.

If you have concentrated your probability into different narrow channels than somebody else's, then, of course, you have a more interesting dispute; and you should engage in that legendary activity of trying to find some accessible experimental test on which your nonoverlapping models make different predictions.

Humbali: I do not understand the import of all this vaguely mystical talk. Eliezer: I'm trying to explain why, when I say that I'm very confident it's possible to build a human-equivalent mind using less computing power than biology has managed to use effectively, and you say, "How can you be so confident, what if you are wrong," it is not unreasonable for me to reply, "Well, kid, this doesn't seem like one of those places where it's particularly important to worry about far-flung ways I could be wrong."

Less-learned minds will have minds full of what-ifs they can't refute in more places than more-learned minds; and even if you cannot see how to refute all your what-ifs yourself, it is possible that a more-learned mind knows why they are improbable. For one must distinguish possibility from probability. But if you've spent enough time noticing where Reality usually exercises its sovereign right to yell "Gotcha!", you learn that Reality is going to confound you in some other way than that.

I mean, maybe you haven't read enough neuroscience and evolutionary biology that you can see from your own knowledge that the proposition sounds massively implausible and ridiculous. But it should hardly seem unlikely that somebody else, more learned in biology, might be justified in having more confidence than you. Phones don't fall up. Reality really is very stable and orderly in a lot of ways, even in places where you yourself are ignorant of that order.

But if "What if aliens are making themselves visible in flying saucers because they want high status, and they'll have higher status if they're occasionally observable but never deign to talk with us?" still strikes you as a forceful consideration, I am not sure what more I can say here. It may require a kind of life experience that I don't know how to give people, at all, let alone by having them passively read paragraphs of text that I write; a learned, perceptual sense of which what-ifs have any force behind them.

I mean, I can refute that specific scenario, I can put that learned sense into words; but I'm not sure that does me any good unless you learn how to refute it yourself. Humbali: Can we leave aside all that meta stuff and get back to the object level? Eliezer: This indeed is often wise.

Humbali: Then here's one way that the minimum computational requirements for general intelligence could be higher than Moravec's argument from the human brain suggests. After all, we only have one existence proof that general intelligence is possible at all, namely the human brain. Perhaps there's no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.

In that case you'd need a lot more computing operations per second than you'd get by calculating the number of potential spikes flowing around the brain! What if it's true? How can you know? Modern person: This seems like an obvious straw argument? Eliezer: I can imagine that if we were trying specifically to upload a human that there'd be no easy and simple and obvious way to run the resulting simulation and get a good answer, without simulating neurotransmitter flows in extra detail.

To imagine that every one of these simulated flows is being usefully used in general intelligence, and that there is no way to simplify the mind design to use fewer computations - I suppose I could try to refute that specifically, but it seems to me that this is a road which has no end unless I can convey the generator of my refutations.

Your what-iffery is flung far enough that, if I cannot leave even that much rejection as an exercise for the reader to do on their own without my holding their hand, the reader has little enough hope of following the rest; let them depart now, in indignation shared with you, and save themselves further outrage.

I mean, it will obviously be less obvious to the reader because they will know less than I do about this exact domain, it will justly take more work for the reader to specifically refute you than it takes me to refute you. But I think the reader needs to be able to do that at all, in this example, to follow the more difficult arguments later. Imaginary Moravec: I don't think it changes my conclusions by an order of magnitude, but some people would worry that, for example, changes of protein expression inside a neuron in order to implement changes of long-term potentiation, are also important to intelligence, and could be a big deal in the brain's real, effectively-used computational costs.

I'm curious if you'd dismiss that as well, the same way you dismiss the probability that you'd have to simulate every neurotransmitter molecule? Eliezer: Oh, of course not. Long-term potentiation suddenly turning out to be a big deal you overlooked, compared to the depolarization impulses spiking around, is very much the sort of thing where Reality sometimes jumps out and yells "Gotcha!"

Humbali: How can you tell the difference? Eliezer: Experience with Reality yelling "Gotcha!" Humbali: They seem like equally plausible speculations to me! Eliezer: Really? Humbali: Yes! They're both what-ifs we can't know are false and shouldn't be overconfident about denying! Eliezer: My tiny feeble mortal mind is far away from reality and only bound to it by the loosest of correlating interactions, but I'm not that unbound from reality.

Moravec: I would guess that in real life, long-term potentiation is sufficiently slow and local that what goes on inside the cell body of a neuron over minutes or hours is not as big of a computational deal as thousands of times that many spikes flashing around the brain in milliseconds or seconds. That's why I didn't make a big deal of it in my own estimate.

Eliezer: Sure. But it is much more the sort of thing where you wake up to a reality-authored science headline saying "Gotcha! There were tiny DNA-activation interactions going on in there at high speed, and they were actually pretty expensive and important!"

The brain is as computationally efficient a generally intelligent engine as any algorithm can be! I mean, I am a competent research roboticist and it is difficult to become one if you are completely unglued from reality. Moravec: Because while it's the kind of Fermi estimate that can be off by an order of magnitude in practice, it doesn't really seem like it should be, I don't know, off by three orders of magnitude?

And even three orders of magnitude is just 10 years of Moore's Law. Eliezer: And the year for strong AI even more so. Moravec: Heh! That's not usually the direction in which people argue with me. Eliezer: There's an important distinction between the direction in which people usually argue with you, and the direction from which Reality is allowed to yell "Gotcha!"
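Moravec's "three orders of magnitude is just 10 years" conversion assumes roughly one doubling per year; with the more commonly quoted 18-month doubling time, the same error buys about fifteen years. A minimal sketch:

```python
import math

def years_to_gain(factor, doubling_time_years):
    """Years of steady exponential doubling needed to gain a multiplicative factor."""
    return doubling_time_years * math.log2(factor)

print(years_to_gain(1000, 1.0))  # ~10 years at one doubling per year
print(years_to_gain(1000, 1.5))  # ~15 years at an 18-month doubling time
```

The conversion is only as good as the assumed doubling time, which itself has not stayed constant historically.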

I mean, in principle what I was arguing for was various lower bounds on performance, but I sure could have emphasized more loudly that those were lower bounds - well, I did emphasize the lower-bound part, but - from the way I felt when AlphaGo and Alpha Zero and GPT-2 and GPT-3 showed up, I think I must've sorta forgot that myself.

Moravec: Anyways, if we say that I might be up to three orders of magnitude off and phrase the prediction accordingly, do you agree with it then? Eliezer: No, I think you're just wrong. On my view, creating AGI is strongly dependent on how much knowledge you have about how to do it, in a way which almost entirely obviates the relevance of arguments from human biology.

Like, human biology tells us a single not-very-useful data point about how much computing power evolutionary biology needs in order to build a general intelligence, using very alien methods to our own. Then, very separately, there's the constantly changing level of how much cognitive science, neuroscience, and computer science our own civilization knows.

We don't know how much computing power is required for AGI for any level on that constantly changing graph, and biology doesn't tell us. All we know is that the hardware requirements for AGI must be dropping by the year, because the knowledge of how to create AI is something that only increases over time. At some point the moving lines for "decreasing hardware required" and "increasing hardware available" will cross over, which lets us predict that AGI gets built at some point.
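That crossover argument can be written down as a toy model. Every number below is a made-up placeholder (the 2020 anchors, both rates), and the function is ours for illustration; the one honest takeaway is how sensitive the predicted year is to the unknowable rate at which knowledge lowers the hardware requirement:

```python
def crossover_year(required_flops, decline_per_year,
                   available_flops, growth_per_year, start_year=2020):
    """First year in which hardware available meets hardware required,
    assuming both trends stay fixed exponentials."""
    year, req, avail = start_year, required_flops, available_flops
    while avail < req:
        year += 1
        req /= decline_per_year      # knowledge lowers the requirement
        avail *= growth_per_year     # hardware supply grows
    return year

# Same hardware trend, two guesses at how fast requirements fall:
print(crossover_year(1e24, 1.25, 1e18, 1.3))  # → 2049
print(crossover_year(1e24, 1.05, 1e18, 1.3))  # → 2065
```

Changing one unmeasurable parameter from 25% to 5% a year moves the date by sixteen years, which is the point: the crossover exists, but we cannot graph either curve.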

But we don't know how to graph two key functions needed to predict that date. You would seem to be committing the classic fallacy of searching for your keys under the streetlight where the visibility is better. You know how to estimate how many floating-point operations per second the retina could effectively be using, but this is not the number you need to predict the outcome you want to predict.

You need a graph of human knowledge of computer science over time, and then a graph of how much computer science requires how much hardware to build AI, and neither of these graphs are available. It doesn't matter how many chapters your book spends considering the continuation of Moore's Law or computation in the retina, and I'm sorry if it seems rude of me in some sense to just dismiss the relevance of all the hard work you put into arguing it.

But you're arguing the wrong facts to get to the conclusion, so all your hard work is for naught. Humbali: Now it seems to me that I must chide you for being too dismissive of Moravec's argument. Fine, yes, Moravec has not established with logical certainty that strong AI must arrive at the point where top supercomputers match the human brain's 10 trillion operations per second. But has he not established a reference class, the sort of base rate that good and virtuous superforecasters, unlike yourself, go looking for when they want to anchor their estimate about some future outcome?

Eliezer: With ranges that wide, it'd be more likely and less amusing to hit somewhere inside it by coincidence. But I still think this whole line of thoughts is just off-base, and that you, Humbali, have not truly grasped the concept of a virtuous superforecaster or how they go looking for reference classes and base rates. Humbali: I frankly think you're just being unvirtuous. Maybe you have some special model of AGI which claims that it'll arrive in a different year or be arrived at by some very different pathway.

But is not Moravec's estimate a sort of base rate which, to the extent you are properly and virtuously uncertain of your own models, you ought to regress in your own probability distributions over AI timelines? As you become more uncertain about the exact amounts of knowledge required and what knowledge we'll have when, shouldn't you have an uncertain distribution about AGI arrival times that centers around Moravec's base-rate prediction of 2010? For you to reject this anchor seems to reveal a grave lack of humility, since you must be very certain of whatever alternate estimation methods you are using in order to throw away this base rate entirely.

Eliezer: Like I said, I think you've just failed to grasp the true way of a virtuous superforecaster. Thinking a lot about Moravec's so-called 'base rate' is just making you, in some sense, stupider; you need to cast your thoughts loose from there and try to navigate a wilder and less tamed space of possibilities, until they begin to gel and coalesce into narrower streams of probability. Which, for AGI, they probably won't do until we're quite close to AGI, and start to guess correctly how AGI will get built; for it is easier to predict an eventual global pandemic than to say it will start in November of 2019. Even in October of 2019 this cannot be done.

Humbali: Then all this uncertainty must somehow be quantified, if you are to be a virtuous Bayesian; and again, for lack of anything better, the resulting distribution should center on Moravec's base-rate estimate of 2010. Eliezer: No, that calculation is just basically not relevant here; and thinking about it is making you stupider, as your mind flails in the trackless wilderness grasping onto unanchored air. Things must be 'sufficiently similar' to each other, in some sense, for us to get a base rate on one thing by looking at another thing.

Humans making an AGI is just too dissimilar to evolutionary biology making a human brain for us to anchor 'how much computing power at the time it happens' from one to the other. It's not the droid we're looking for; and your attempt to build an inescapable epistemological trap about virtuously calling that a 'base rate' is not the Way.

Imaginary Moravec: If I can step back in here, I don't think my calculation is zero evidence? What we know from evolutionary biology is that a blind alien god with zero foresight accidentally mutated a chimp brain into a general intelligence. I don't want to knock biology's work too much, there's some impressive stuff in the retina, and the retina is just the part of the brain which is in some sense easiest to understand.

Eliezer: If that were true, the same theory would predict that our current supercomputers should be doing a better job of matching the agility and vision of spiders. When at some point there's enough hardware that we figure out how to put it together into AGI, we could be doing it with less hardware than a human; we could be doing it with more; and we can't even say that these two possibilities are around equally probable, such that our probability distribution should have its median around the human-brain figure. Your number is so bad and obtained by such bad means that we should just throw it out of our thinking and start over.

Humbali: This last line of reasoning seems to me to be particularly ludicrous, like you're just throwing away the only base rate we have in favor of a confident assertion of our somehow being more uncertain than that. Eliezer: Yeah, well, sorry to put it bluntly, Humbali, but you have not yet figured out how to turn your own computing power into intelligence.

I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were. I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood.

Luke Muehlhauser (still being paraphrased): It seems like it ought to be acknowledged somehow. Eliezer: That's fair, yeah, I can see how someone might think it was relevant. I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful. You don't get to screw up yourself and then use that as an argument about how nobody else can do better.

Humbali: Uh, what's the actual drama being subtweeted here? Eliezer: A certain teenaged futurist, who, for example, said in 1999, "The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015." Why, he's not even putting a probability distribution over his mad soothsaying - how blatantly absurd can a person get?

Eliezer: Dear child ignorant of history, your complaint is far too anachronistic. This is 1999 we're talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced. Eliezer hasn't put his draft online for "Cognitive biases potentially affecting judgment of global risks". The Sequences won't start until another year after that. How would the forerunners of effective altruism in 1999 know about putting probability distributions on forecasts?

I haven't told them to do that yet! We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen.

Though there's also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment. Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes.

But that was too much of a digression, when I tried to write it up; maybe later I'll post something separately.

Once we've got, in your terms, human-equivalent AIs, even if we don't go beyond that in terms of intelligence, Moore's Law will start speeding them up. Once AIs are thinking thousands of times faster than we are, wouldn't that tend to break down the graph of Moore's Law with respect to the objective wall-clock time of the Earth going around the Sun?

Because AIs would be able to spend thousands of subjective years working on new computing technology? Actual Eliezer (out loud): Thank you for answering my question. Actual Eliezer (internally): Moore's Law is a phenomenon produced by human cognition and the fact that human civilization runs off human cognition. You can't expect the surface phenomenon to continue unchanged after the deep causal phenomenon underlying it starts changing.

What kind of bizarre worship of graphs would lead somebody to think that the graphs were the primary phenomenon and would continue steady and unchanged when the forces underlying them changed massively? I was hoping he'd be less nutty in person than in the book, but oh well. Eliezer, sighing: Another day, another biology-inspired timelines forecast.

This trick didn't work when Moravec tried it, it's not going to work while Ray Kurzweil is trying it, and it's not going to work when you try it either. It also didn't work when a certain teenager tried it, but please entirely ignore that part; you're at least allowed to do better than him. Imaginary Somebody: Moravec's prediction failed because he assumed that you could just magically take something with around as much hardware as the human brain and, poof, it would start being around that intelligent - Eliezer: Yes, that is one way of viewing an invalidity in that argument.

Though you do Moravec a disservice if you imagine that he could only argue "It will magically emerge", and could not give the more plausible-sounding argument "Human engineers are not that incompetent compared to biology, and will probably figure it out without more than one or two orders of magnitude of extra overhead."

Eliezer: And yet, because your reasoning contains the word "biological", it is just as invalid and unhelpful as Moravec's original prediction. Somebody: I don't see why you dismiss my biological argument about timelines on the basis of Moravec having been wrong. He made one basic mistake - neglecting to take into account the cost to generate intelligence, not just to run it.

I have corrected this mistake, and now my own effort to do biologically inspired timeline forecasting should work fine, and must be evaluated on its own merits, de novo. Eliezer: It is true indeed that sometimes a line of inference is doing just one thing wrong, and works fine after being corrected. And because this is true, it is often indeed wise to reevaluate new arguments on their own merits, if that is how they present themselves.

One may not take the past failure of a different argument or three, and try to hang it onto the new argument like an inescapable iron ball chained to its leg. It might be cause for defeasible skepticism, but not invincible skepticism. That said, on my view, you are making a mistake nearly identical to Moravec's, and so his failure remains relevant to the question of whether you are engaging in a kind of thought that binds well to Reality. Somebody: And that mistake is just mentioning the word "biology"?

Eliezer: The problem is that the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life. The human brain consumes around 20 watts of power. Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI?

Somebody: That's absurd, of course. So, what, you compare my argument to an absurd argument, and from this dismiss it? Eliezer: I'm saying that Moravec's "argument from comparable resource consumption" must be in general invalid , because it Proves Too Much. If it's in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead to call the basis of your median estimate.

You say that AIs consume energy in a very different way from brains? Well, they'll also consume computations in a very different way from brains! The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.

Since you know that much about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely. You are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but "an unknown key does not open an unknown lock", and these two ignorant distributions should not assert much internal correlation between them.

Even without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you would make, if you knew any specifics instead of none. If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you'd then be able to see the enormous vast specific differences between them, and go, "Wow, what a futile resource-consumption comparison to try to use for forecasting."

I think it's probably too abstract for most people to feel in their gut, or something like that, so their brain ignores it and moves on in the end. I have had life experience with learning more about a thing, updating, and then going to myself, "Wow, I should've been able to predict in retrospect that learning almost any specific fact would move my opinions in that same direction."

Somebody: All of that seems irrelevant to my novel and different argument. I am not foolishly estimating the resources consumed by a single brain; I'm estimating the resources consumed by evolutionary biology to invent brains! Eliezer: And the humans wracking their own brains and inventing new AI program architectures and deploying those AI program architectures to themselves learn, will consume computations so utterly differently from evolution that there is no point comparing those consumptions of resources.

That is the flaw that you share exactly with Moravec, and that is why I say the same of both of you: "This is a kind of thinking that fails to bind upon reality; it doesn't work in real life." It's just not a relevant fact. At least Somebody is only wrong on the object level, and isn't trying to build an inescapable epistemological trap by which his ideas must still hang in the air like an eternal stench even after they've been counterargued.

Isn't 'but muh base rates' what your viewpoint would've also said about Moravec's estimate, back when that number still looked plausible? Humbali: Of course it is evident to me now that my youthful enthusiasm was mistaken; obviously I tried to estimate the wrong figure.

As Somebody argues, we should have been estimating the biological computations used to design human intelligence, not the computations used to run it. I see, now, that I was using the wrong figure as my base rate, leading my base rate to be wildly wrong, and even irrelevant; but now that I've seen this, the clear error in my previous reasoning, I have a new base rate. This doesn't seem to me obviously likely to contain the same kind of wildly invalidating enormous error as before.

Eliezer: What, is Reality just going to yell "Gotcha! This trick just never works, at all, deal with it and get over it"? You might as well reason that 20 watts is a base rate for how much energy the first generally intelligent computing machine should consume. Imaginary OpenPhil: Summary of report. Full draft of report. Our leadership takes this report Very Seriously. Eliezer: Oh, hi there, new kids. Your grandpa is feeling kind of tired now and can't debate this again with as much energy as when he was younger.

Imaginary OpenPhil: You're not that much older than us. Eliezer: Not by biological wall-clock time, I suppose, but - OpenPhil: You think thousands of times faster than us? Eliezer: I wasn't going to say it if you weren't. OpenPhil: We object to your assertion on the grounds that it is false. Eliezer: I was actually going to say, you might be underestimating how long I've been walking this endless battlefield because I started really quite young.

I mean, sure, I didn't read Moravec's Mind Children when it came out; I only read it four years later, when I was twelve. And sure, I didn't immediately afterwards start writing online about Moore's Law and strong AI; I did not immediately contribute my own salvos and sallies to the war; I was not yet a noticed voice in the debate. I only got started on that at age sixteen. I'd like to be able to say that back then I was just a random teenager being reckless, but in fact I was already being invited to dignified online colloquia about the "Singularity" and mentioned in printed books; when I was being wrong back then I was already doing so in the capacity of a minor public intellectual on the topic.

This is, as I understand normie ways, relatively young, and is probably worth an extra decade tacked onto my biological age; you should imagine me as being 52 instead of 42 as I write this, with a correspondingly greater number of visible gray hairs. A few years later - though still before your time - there was the Accelerating Change Foundation, and Ray Kurzweil spending literally millions of dollars to push Moore's Law graphs of technological progress as the central story about the future.

I mean, I'm sure that a few million dollars sounds like peanuts to OpenPhil, but if your own annual budget was a hundred thousand dollars or so, that's a hell of a megaphone to compete with. If you are currently able to conceptualize the Future as being about something other than nicely measurable metrics of progress in various tech industries, being projected out to where they will inevitably deliver us nice things - that's at least partially because of a battle fought years earlier, in which I was a primary fighter, creating a conceptual atmosphere you now take for granted.

A mental world where threshold levels of AI ability are considered potentially interesting and transformative - rather than milestones of new technological luxuries to be checked off on an otherwise invariant graph of Moore's Laws as they deliver flying cars, space travel, lifespan-extension escape velocity, and other such goodies on an equal level of interestingness. I have earned at least a little right to call myself your grandpa.

And that kind of experience has a sort of compounded interest, where, once you've lived something yourself and participated in it, you can learn more from reading other histories about it. The histories become more real to you once you've fought your own battles. The fact that I've lived through timeline errors in person gives me a sense of how it actually feels to be around at the time, watching people sincerely argue Very Serious erroneous forecasts.

That experience lets me really and actually update on the history of the earlier mistaken timelines from before I was around; instead of the histories just seeming like a kind of fictional novel to read about, disconnected from reality and not happening to real people.

And now, indeed, I'm feeling a bit old and tired for reading yet another report like yours in full attentive detail. Does it by any chance say that AGI is due in about 30 years from now? OpenPhil: Our report has very wide credible intervals around both sides of its median, as we analyze the problem from a number of different angles and show how they lead to different estimates - Eliezer: Unfortunately, the thing about figuring out five different ways to guess the effective IQ of the smartest people on Earth, and having three different ways to estimate the minimum IQ to destroy lesser systems such that you could extrapolate a minimum IQ to destroy the whole Earth, and putting wide credible intervals around all those numbers, and combining and mixing the probability distributions to get a new probability distribution, is that, at the end of all that, you are still left with a load of nonsense.

Doing a fundamentally wrong thing in several different ways will not save you, though I suppose if you spread your bets widely enough, one of them may be right by coincidence. So does the report by any chance say - with however many caveats and however elaborate the probabilistic methods and alternative analyses - that AGI is probably due in about 30 years from now?

OpenPhil: Yes, in fact, our report's median estimate is about thirty years out; though, again, with very wide credible intervals around both sides. Is that number significant? Eliezer: It's a law generalized by Charles Platt, that any AI forecast will put strong AI thirty years out from when the forecast is made. Vernor Vinge referenced it in the body of his famous NASA speech, whose abstract begins, "Within thirty years, we will have the technological means to create superhuman intelligence.

Shortly after, the human era will be ended." This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt's generalization about such forecasts always being thirty years from the time they're made, which Vinge explicitly cites later in the speech. Or to put it another way: I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, "Never mind predicting strong AI in thirty years, you should be predicting superintelligence in thirty years, which matters a lot more."

OpenPhil: Superintelligence within thirty years, huh? I suppose Vinge still has two years left to go before that's falsified. Eliezer: Also in the body of the speech, Vinge says, "I'll be surprised if this event occurs before 2005 or after 2030," which sounds like a more serious and sensible way of phrasing an estimate.

I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge's prediction. The jury's still out on whether Vinge will have made a good call. Oh, and sorry if grandpa is boring you with all this history from the times before you were around. I mean, I didn't actually attend Vinge's famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later.

Once it was digitized and put online, it was all over the Internet. Well, all over certain parts of the Internet, anyways. Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters. But, yeah, the new kids showing up with some graphs of Moore's Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it's... OpenPhil: That part about Charles Platt's generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn't justify dismissing our work, right?

We could have used a completely valid method of estimation which would have pointed to the same date no matter which year it was tried in, and, by sheer coincidence, have first written it up only now. In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around the same date - Eliezer: Look, people keep trying this.

It's never worked. It's never going to work. I'd love to know the timelines too, but you're not going to get the answer you want until right before the end of the world, and maybe not even then unless you're paying very close attention. Timing this stuff is just plain hard. OpenPhil: But our report is different, and our methodology for biologically inspired estimates is wiser and less naive than those who came before.

Eliezer: That's what the last guy said, but go on. OpenPhil: First, we carefully estimate a range of possible figures for the equivalent of neural-network parameters needed to emulate a human brain. Then, we estimate how many examples would be required to train a neural net with that many parameters. Then, we estimate the total computational cost of that many training runs.

Moore's Law then gives us our median time estimate, given what we think are the most likely underlying assumptions, though we do analyze it several different ways. Eliezer: This is almost exactly what the last guy tried, except you're using network parameters instead of computing ops, and deep learning training runs instead of biological evolution. OpenPhil: Yes, so we've corrected his mistake of estimating the wrong biological quantity and now we're good, right?
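The chain of estimates OpenPhil describes (parameter count, then training examples, then total compute, then a Moore's Law extrapolation) can be caricatured in a few lines. Every number below is an invented placeholder, not a figure from any actual report; the point is the shape of the calculation, not its output.

```python
import math

# All figures are invented placeholders, not values from any real report.
params = 1e14                   # assumed parameter count for a brain-scale model
examples_per_param = 20         # assumed training examples needed per parameter
flop_per_example = 6 * params   # rough fwd+bwd cost: ~6 FLOP per parameter per example

total_flop = params * examples_per_param * flop_per_example

# Extrapolate affordability with a Moore's-Law-style doubling (also invented).
affordable_flop_today = 1e24
doubling_years = 2.0
years_out = max(0.0, math.log2(total_flop / affordable_flop_today) * doubling_years)

print(f"assumed training cost: {total_flop:.1e} FLOP")
print(f"naive arrival estimate: {years_out:.0f} years out")
```

Eliezer's complaint, restated in these terms, is that every variable above is a tunable unknown, and the final figure swings by decades as they move.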

Eliezer: That's what the last guy thought he'd done about Moravec's mistaken estimation target. And neither he nor Moravec would have made much headway on their underlying mistakes, by doing a probabilistic analysis of that same wrong question from multiple angles. OpenPhil: Look, sometimes more than one person makes a mistake, over historical time. It doesn't mean nobody can ever get it right.

You of all people should agree. Eliezer: I do so agree, but that doesn't mean I agree you've fixed the mistake. I think the methodology itself is bad, not just its choice of which biological parameter to estimate. OpenPhil: Because AGI isn't like biology, and in particular, will be trained using gradient descent instead of evolutionary search, which is cheaper.

We do note inside our report that this is a key assumption, and that, if it fails, the estimate might be correspondingly wrong - Eliezer: But then you claim that mistakes are equally likely in both directions and so your unstable estimate is a good median. What if it was, predictably, a directional overestimate? OpenPhil: Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate. Are you claiming this was predictable in foresight instead of hindsight?

Eliezer: I'm claiming that, at the time, I snorted and tossed Somebody's figure out the window while thinking it was ridiculously huge and absurd, yes. OpenPhil: Because you'd already foreseen, back then, that gradient descent would be the method of choice for training future AIs, rather than genetic algorithms? Eliezer: Ha! Because it was an insanely costly hypothetical approach whose main point of appeal, to the sort of person who believed in it, was that it didn't require having any idea whatsoever of what you were doing or how to design a mind.

OpenPhil: Suppose one were to reply: "Somebody" didn't know better-than-evolutionary methods for designing a mind, just as we currently don't know better methods than gradient descent for designing a mind; and hence Somebody's estimate was the best estimate at the time, just as ours is the best estimate now?

Eliezer: Unless you were one of a small handful of leading neural-net researchers who knew a few years ahead of the world where scientific progress was heading - who knew a Thielian 'secret' before finding evidence strong enough to convince the less foresightful - you couldn't have called the jump specifically to gradient descent rather than any other technique. But knowledge is a ratchet that usually only turns one way, so it's predictable that the current story changes somewhere over future time, in a net expected direction.

Let's consider the technique currently known as mixture-of-experts (MoE), for training smaller nets in pieces and muxing them together. It's not my mainline prediction that MoE actually goes anywhere - if I thought MoE was actually promising, I wouldn't call attention to it, of course! I don't want to make timelines shorter; that is not a service to Earth, not a good sacrifice in the cause of winning an Internet argument. But if I'm wrong and MoE is not a dead end, that technique serves as an easily-visualizable case in point.

If that's a fruitful avenue, the technique currently known as "mixture-of-experts" will mature further over time, and future deep learning engineers will be able to further perfect the art of training slices of brains using gradient descent and fewer examples, instead of training entire brains using gradient descent and lots of examples. Or, more likely, it's not MoE that forms the next little trend. But there is going to be something, especially if we're sitting around waiting for decades. Three decades is enough time for some big paradigm shifts in an intensively researched field.
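For readers who haven't met it, the routing idea behind mixture-of-experts can be sketched as a toy: a router scores a handful of small expert networks and only the winning expert runs, so most parameters sit idle on any given input. This is a deliberately minimal top-1 sketch with made-up random weights, nothing like a production MoE layer.

```python
import random

random.seed(0)
n_experts, dim = 4, 8

# Each "expert" is a tiny dim x dim weight matrix; the router scores experts.
experts = [[[random.gauss(0, 1) for _ in range(dim)] for _ in range(dim)]
           for _ in range(n_experts)]
router = [[random.gauss(0, 1) for _ in range(n_experts)] for _ in range(dim)]

def matvec(m, v):
    # (len(v) x k) matrix times length-len(v) vector -> length-k vector
    return [sum(m[j][i] * v[j] for j in range(len(v))) for i in range(len(m[0]))]

def moe_forward(x):
    scores = matvec(router, x)
    k = scores.index(max(scores))      # top-1 routing: only one expert runs
    return matvec(experts[k], x), k

y, chosen = moe_forward([0.1 * i for i in range(dim)])
print(f"routed to expert {chosen}; output length {len(y)}")
```

The compute saving Eliezer alludes to falls out directly: each input touches one expert's weights rather than all of them.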

Maybe we'd end up using neural net tech very similar to today's tech if the world ends soon, but in that case, of course, your prediction must have failed somewhere else. The three components of AGI arrival times are: available hardware, which increases over time in an easily graphed way; available knowledge, which increases over time in a way that's much harder to graph; and hardware required at a given level of knowledge, a huge multidimensional unknown background parameter.

The fact that you have no idea how to graph the increase of knowledge - or measure it in any way that is less completely silly than "number of science papers published" or whatever such gameable metric - doesn't change the point that this is a predictable fact about the future; there will be more knowledge later, the more time that passes, and that will directionally change the expense of the currently least expensive way of doing things.

OpenPhil: We did already consider that and tried to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements. It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every few years. Eliezer: Oh, nice. I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of '30 years' so exactly.
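The sensitivity Eliezer is needling can be made concrete: in a toy model where effective compute grows from both hardware progress and algorithmic progress, nudging the assumed algorithmic halving time by a year or two moves the forecast by most of a decade. All constants here are invented for illustration, not taken from any report.

```python
import math

def arrival_years(required_flop, affordable_today=1e24,
                  hw_doubling_years=2.5, algo_halving_years=3.0):
    """Years until required_flop of effective compute is affordable, if
    hardware doubles every hw_doubling_years and algorithms halve the
    requirement every algo_halving_years. All defaults are invented."""
    gap_doublings = math.log2(required_flop / affordable_today)
    doublings_per_year = 1 / hw_doubling_years + 1 / algo_halving_years
    return gap_doublings / doublings_per_year

for halving in (2.0, 3.0, 4.0):
    y = arrival_years(1e30, algo_halving_years=halving)
    print(f"algorithmic halving every {halving:.0f}y -> arrival in {y:.0f}y")
```

A parameter that is this hard to measure and this influential is exactly the kind of dial that can be tuned, consciously or not, until the model lands on a psychologically comfortable answer.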

OpenPhil: Eliezer. Eliezer: Think of this in an economic sense: people don't buy where goods are most expensive and delivered latest, they buy where goods are cheapest and delivered earliest. Deep learning researchers are not like an inanimate chunk of ice tumbling through intergalactic space in its unchanging direction of previous motion; they are economic agents who look around for ways to destroy the world faster and more cheaply than the way that you imagine as the default.

They are more eager than you are to think of more creative paths to get to the next milestone faster. OpenPhil: Isn't this desire for cheaper methods exactly what our model already accounts for, by modeling algorithmic progress? Eliezer: The makers of AGI aren't going to be doing astronomical numbers of rounds of gradient descent, on entire brain-sized models, algorithmically faster than today.

They're going to get to AGI via some route that you don't know how to take, at least if it happens decades from now. If it happens sooner, it may be via a route that some modern researchers do know how to take, but in that case, of course, your model was also wrong. They're not going to be taking your default-imagined approach algorithmically faster; they're going to be taking an algorithmically different approach that eats computing power in a different way than you imagine it being consumed.

OpenPhil: Shouldn't that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every few years due to better algorithms? Eliezer: For reference, recall when Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a "deep" neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.

At least so long as you didn't try to stack too many layers, like a dozen layers or something ridiculous like that. This being the point that kicked off the entire deep-learning revolution. Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning using techniques from before deep learning and around fifty times as much computing power.

OpenPhil: No, that's totally not what our viewpoint says when you backfit it to past reality. Our model does a great job of retrodicting past reality. Eliezer: How so? I'm not convinced. OpenPhil: We didn't think you would be; you're sort of predictable that way. Eliezer: Well, yes, if I'd predicted I'd update from hearing your argument, I would've updated already.

I may not be a real Bayesian but I'm not that incoherent.



However, when we repeat this process with the free bet, we can lock in a guaranteed profit every time. Each of these matched bets consists of a back bet and a lay bet. At Smarkets, the exchange site we recommend, members of the public exchange bets with each other.
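The back-and-lay arithmetic can be sketched in a few lines. The odds and stakes below are invented examples; the `commission` parameter stands in for the cut an exchange takes from winning lay bets, and the formulas are the standard matched-betting ones for equalising the result across both outcomes.

```python
def lay_stake(back_stake, back_odds, lay_odds, commission=0.0):
    # Standard matched-betting formula: equalise the result across outcomes.
    return back_stake * back_odds / (lay_odds - commission)

def qualifying_bet(back_stake, back_odds, lay_odds, commission=0.0):
    # Result (usually a small loss) of a normal "qualifying" bet.
    ls = lay_stake(back_stake, back_odds, lay_odds, commission)
    if_back_wins = back_stake * (back_odds - 1) - ls * (lay_odds - 1)
    if_lay_wins = ls * (1 - commission) - back_stake
    return if_back_wins, if_lay_wins

def free_bet(free_stake, back_odds, lay_odds, commission=0.0):
    # Stake-not-returned free bet: only the winnings (odds - 1) pay out,
    # so both outcomes lock in a positive profit.
    ls = free_stake * (back_odds - 1) / (lay_odds - commission)
    if_back_wins = free_stake * (back_odds - 1) - ls * (lay_odds - 1)
    if_lay_wins = ls * (1 - commission)
    return if_back_wins, if_lay_wins

print(qualifying_bet(10, 2.0, 2.1))  # small equal loss either way
print(free_bet(10, 5.0, 5.2))        # equal guaranteed profit either way
```

With these illustrative numbers, a 10-unit qualifying bet at back odds 2.0 and lay odds 2.1 loses about 0.48 whichever side wins, while a 10-unit stake-not-returned free bet at 5.0/5.2 locks in roughly 7.69 either way.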

A betting exchange means you act as the bookmaker, since you bet against an outcome (including the draw in football). Remember to place the back bet before the lay bet, since if the odds change at the bookmaker, we may lose money, which is one of the limited ways in which matched betting can go wrong. With the Oddsmonkey forum, you can easily stay up to date with everything matched betting.

Matched Betting Blog Forum
The Matched Betting Blog was started some years back and has been around since then, growing and helping out anyone with an interest in matched betting. The blog was started to provide a place where anyone could easily get access to info and advice about matched betting. Their forum also helps make this goal a reality, as it provides members a place where they can easily get access to guides and ask other users questions about anything matched betting.

On this forum, you can easily find out about different ways you can earn online, and you can also ask questions whenever you have an issue. The forum has a dedicated part where individuals involved in matched betting share advice and tips and chat with others. If you are searching for a forum that has members willing to help each other out, offering advice and tips, then The Money Shed Forum is a nice place to go to.
Profit Squad Forum
Matched betting can be confusing if you are just starting.

So, having a dedicated forum where you can easily ask for help can be of great benefit to you. They can talk about new offers, matched betting strategies, and can offer support to each other too. They can also chat with others about their day to day life and even make friends too. They also have a forum dedicated to matched betting where members can share advice and tips, offering others help on topics they find confusing.

If you are searching for a site that talks about earning money online with Matched Betting included, then Money Saving Expert Matched Betting Forum can be a nice choice for you.
