Measuring Computing Power - Ghz versus Variables

posted on Apr, 9 2004 @ 10:04 AM
I've been reading the John Titor chronicles courtesy of the mysticfish site (www.mysticfish.net...).

One item in particular got me thinking. I'll quote it here; it's easily found about 60% of the way down the page I mentioned above, at line 380.

"(7)What is the speed of the average computer in the future? I am assuming it is in ghZ, if it is higher, could you post the name of that hZ measurement and its relation to the ghZ?

Ghz is not a useful measurement. Computers are no longer measured by their speed as much as the number of variables (not calculations) they can handle per second."

My question is, how different is this really? Variables are ultimately used in calculations or logic in some form. Is it possible to use linear regression to determine the average number of variables in a "calculation" and derive a figure that translates variables per second into calcs per second?
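To make that concrete, here's the kind of thing I have in mind -- a rough sketch in Python, with every number below made up purely for illustration: fit a line between variables touched and instructions executed, then use the slope to translate a "variables per second" figure into instructions per second.

```python
# Rough sketch only -- the sample data is invented for illustration.
# Idea: if we measured (variables referenced, instructions executed) for a
# handful of programs, a least-squares fit gives an average
# "instructions per variable" ratio to use as a conversion factor.

samples = [
    (1_000, 3_200),    # (variables referenced, instructions executed)
    (2_000, 6_100),
    (4_000, 12_500),
    (8_000, 24_800),
]

# Least-squares slope through the origin: instructions ~= slope * variables
slope = sum(v * i for v, i in samples) / sum(v * v for v, _ in samples)

# Translate a hypothetical "variables per second" rating into calcs/second.
variables_per_second = 50_000_000
print(f"~{slope:.2f} instructions per variable")
print(f"~{variables_per_second * slope:,.0f} instructions per second")
```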

I'm still debating what I think about this whole Titor business but I am interested in the usage of variables per second as a measure of computing power.



posted on Apr, 9 2004 @ 10:06 AM
Website courtesy of ATS member Spring. I apologize for the omission above.



posted on Apr, 9 2004 @ 10:17 AM
Sounds like hogwash from someone who has no idea what he's talking about. Computers will always be measured in "calculations per second" or "instructions per second" in some way. The number of variables any computer can handle at once is a simple function of available memory. Just by adding more RAM to our ATS server, I was able to increase the memory allowance for the MySQL server and bump up the number of concurrent queries (variables). Doubling that number provided a nearly 5x improvement in overall performance. Another indication that Titor is full of S__T.
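As a back-of-the-envelope illustration of that "variables are just memory" point, here is a minimal Python sketch; every figure in it is an assumption chosen only to show the arithmetic, not a measurement from the ATS server.

```python
# Back-of-the-envelope sketch: the number of "variables" a machine can hold
# at once is roughly memory divided by the size of each variable.
# All figures below are assumptions for illustration only.

ram_bytes = 2 * 1024**3        # assume a server with 2 GB of RAM
bytes_per_variable = 8         # assume 8 bytes per stored value
overhead_fraction = 0.25       # assume 25% lost to the OS and bookkeeping

usable_bytes = ram_bytes * (1 - overhead_fraction)
max_variables = int(usable_bytes // bytes_per_variable)

print(f"Roughly {max_variables:,} variables fit in memory at once")
```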



posted on Apr, 9 2004 @ 10:26 AM
Skeptic, thanks for the reply. It may be hogwash, but I wanted to think the concept through.

Addressable memory constraints are another measure of a CPU's strength. Newer CPUs can typically address more memory. Remember the Commodore 64: it could only address 64K of memory, and that constraint was an indicator of its power.

I thought the motherboard bus also affected how much memory could be addressed. If so, then it's really the subsystem (CPU plus motherboard) that determines the measurable power.
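For what it's worth, the 64K figure falls straight out of the address width: an n-bit address can name 2^n memory locations. A quick Python illustration (the bus widths listed are just examples):

```python
# Addressable memory follows from the width of the address bus:
# an n-bit address can name 2**n distinct byte locations.
# The Commodore 64's 6510 CPU has a 16-bit address bus, hence 64 KB.

for address_bits in (16, 20, 24, 32):
    addressable_bytes = 2 ** address_bits
    print(f"{address_bits}-bit addresses -> {addressable_bytes:,} bytes "
          f"({addressable_bytes / 1024:,.0f} KB)")
```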



posted on Apr, 9 2004 @ 10:33 AM
At the university where I got my degree (B.S. in Computer Science), we usually referred to the speed of a computer in Millions of Instructions Per Second (MIPS). This measure takes into account the processor speed, the bus speed, the memory speed, and the processor cache speed.

The number of variables that can be handled at once (as SO just mentioned) is related to the number of registers in the CPU. They have names such as AX, BX, CX, DX, etc. To actually do a calculation, the CPU does the equivalent of: add AX to BX and store the result in CX. When it wants to keep a result for later use, it puts it in the Level 1 cache (preferably), then the L2 cache if that is full, then in main memory (RAM), and lastly on the hard drive. SkepticOverlord is correct that memory is the main bottleneck of computers. There are moments when the CPU does nothing and waits for the next instruction or variable to be fetched from memory. This is one of the main focuses of the Computer Science field: how do we keep the CPU from being idle?
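Here's a toy sketch in Python of what I mean; the dictionary is just a stand-in for real hardware registers, not how a CPU actually stores them.

```python
# Toy model of "add AX to BX and store the result in CX": the CPU can only
# juggle as many values at once as it has registers; everything else waits
# in cache or main memory.

registers = {"AX": 7, "BX": 5, "CX": 0, "DX": 0}

# Equivalent of: add the contents of AX and BX, store the result in CX.
registers["CX"] = registers["AX"] + registers["BX"]

print(registers)   # {'AX': 7, 'BX': 5, 'CX': 12, 'DX': 0}
```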

If you wish, you can see what is in the CPU registers by typing "DEBUG" in a DOS command window and then typing 'R' (Enter). Ta-da! There they are. Type 'Q' to quit. Be careful with the DEBUG program; it's possible to mess things up if you don't know what you're doing. In Windows 3 you could actually overwrite operating system memory... hee hee. Later versions of Windows protect this memory range.

[Edited on 9-4-2004 by dbates]



posted on Apr, 9 2004 @ 10:43 AM

Originally posted by dbates
At the university where I got my degree (B.S. in Computer Science), we usually referred to the speed of a computer in Millions of Instructions Per Second (MIPS). This measure takes into account the processor speed, the bus speed, the memory speed, and the processor cache speed.

The number of variables that can be handled at once (as SO just mentioned) is related to the number of registers in the CPU. They have names such as AX, BX, CX, DX, etc. To actually do a calculation, the CPU does the equivalent of: add AX to BX and store the result in CX. When it wants to keep a result for later use, it puts it in the Level 1 cache (preferably), then the L2 cache if that is full, then in main memory (RAM), and lastly on the hard drive. SkepticOverlord is correct that memory is the main bottleneck of computers. There are moments when the CPU does nothing and waits for the next instruction or variable to be fetched from memory. This is one of the main focuses of the Computer Science field: how do we keep the CPU from being idle?


dbates, thanks for reminding me about the CompSci days. I neglected to think about things like registers. Of course, I have an MIS degree, not CompSci, so I took more business classes than low-level classes. I regret it sometimes because I never got to do assembler or build my own compiler.

I'm just trying to find a way to make Titor's claim work and as you can see I am struggling. I don't want to say what I really think yet.



posted on Apr, 9 2004 @ 10:59 AM
If you want to do further study on this matter, I would suggest visiting arstechnica.com. The site is beyond nerdy, and the article I provided a link for is an in-depth discussion of memory usage. You might try searching the site for information on the topic you are researching.

Good Luck

[Edited on 9-4-2004 by dbates]



posted on Apr, 9 2004 @ 11:02 AM
I think Titor has been properly debunked enough times (here and elsewhere) not to be concerned at all over any of his claims.



posted on Apr, 9 2004 @ 03:15 PM
Well, this topic sucked. I was hoping to get some feedback on the original question, but instead all I heard was how Titor is a fraud. Perhaps I should have posted this in another section and framed the question differently to get answers to what I originally asked.

dbates, thanks for that link -- I'm still flipping through it. Cool stuff.



posted on Apr, 9 2004 @ 04:03 PM
titan, I think I follow what you're asking here. You also bring up an interesting 'clue' as to your answer, IMO, as does dbates.

You brought up the Commodore 64 and the fact that it got its name from its 64K of memory. Then dbates said that during his time at school they used Millions of Instructions Per Second (MIPS), which took into account bus speed, processor speed, etc. The common method of today and the recent past does in fact use processor type and/or speed, such as 286, 386, 486DX (33MHz), Pentium 100MHz (586 or 686, if I remember), and so on, now up into the GHz range. The whole point is simply to show that the method used for identification has in fact changed even within our recent history.

Now, the fact remains that 'Commodore 64' was more of a product name and not exactly the same thing. After all, those who understood more about computers did use terms like 8088 and so forth, and may have even referred to the Commodore 64 by a more technical name as well, who knows. Still, the possibility remains that the name or term used for computer capability might truly be different in Titor's future time frame. In fact, the reason could even be a revolutionary new method of computer manufacture that brought a whole new set of terms with which to identify the newer machines.

So while that does give a little room for Titor's claims to be seen as valid, it still seems a bit thin to justify his response to the question. He said:
"Ghz is not a useful measurement. Computers are no longer measured by their speed as much as the number of variables (not calculations) they can handle per second."

That's still a pretty lame answer. In fact, it's not even an answer at all. If what he said is true and GHz isn't used anymore, then I would expect him to use whatever term they did use in the future, even if it wasn't completely understood in today's terminology. He could have just as easily said "GHz isn't used anymore; we use the term Vps. For example, I use a 64M-Vps PC in my future home, meaning 64 million variables per second," or something like that. Honestly, it sounds like he wanted to give the impression that they had long passed GHz speeds, but couldn't come up with a believable 'future term' off the top of his head, so he avoided giving a clear answer.

Plus, 'variables per second,' with variables being something other than calculations or whatever, just seems like a really shaky answer IMO. I'm not saying it couldn't be true, but it sure doesn't sound very likely. It really sounds like he just didn't have a good, believable, high-tech word to use instead of GHz. Although, for as smart as he 'seemed' to be, I'm surprised he didn't just use something like terabytes, which would have been somewhat believable, I suppose.



posted on Apr, 10 2004 @ 12:17 AM
I would think that Titor would have anticipated computer questions; after all, what were he and all his fans using to communicate, not to mention his overall goal? This would have been easy to research by going to Intel's site and reading a few of their "White Papers" on processors.

If I were in his shoes, I would have said something like: we still use the digital processors you have, but we measure them in SOCCs at a fixed bus speed of 20 GHz. We also use massively parallel processors measured in IOS. Then he controls the questioning by having made up neat acronyms that he can then pontificate upon for the rest of the evening. All from having read a few tech papers available from Intel BACK THEN.

My personal opinion is that he spent too much time on politics and not enough time in the tech areas. The tech areas are his "bads", and politics... I will let that definition go. He was a good Seagull.

/\/ight\/\/ing



posted on Apr, 10 2004 @ 12:53 AM
While all the points here are valid, I'd like to point out the following:

Quantum Computing

There's much speculation on whether quantum computers could actually make use of the 'multiverse theory' and perform an effectively infinite number of operations instantaneously. By this, I mean that a giant encryption problem that would take today's fastest supercomputers, worked by a team of 20-30 mathematicians, 5-10 years to crack would be solvable in seconds, if not immediately. If they cannot, then they'll just be hella faster; but if they can, then this would fully account for his claim. If quantum computing were the way of the future, then everything would hinge on how many variables can be handled at once, in the sense that depending on the number of qubits you have working, you'll have a different number of solving variables at any given time. This also raises the greater issue that an old IBM 5100 probably wouldn't be able to communicate with machines like this.
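To put a rough number on the "variables handled at once" idea: n qubits describe a superposition over 2^n basis states, which is the usual back-of-the-envelope argument for why qubit count, rather than clock speed, would be the headline figure. A quick Python sketch (the qubit counts are picked arbitrarily):

```python
# Rough illustration: n qubits span 2**n simultaneous amplitudes,
# so capacity grows exponentially with qubit count, not clock speed.

for qubits in (8, 16, 32, 64):
    print(f"{qubits} qubits -> {2 ** qubits:,} simultaneous amplitudes")
```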

In defense of that, were there to be a quantum computing revolution over the next 40 years, there would likely be a large number of transitional-stage computers that can communicate with both quantum computers and standard computers. Were he to get an IBM 5100, he could translate code stored in one of these transitional machines into a form the 5100 understands, do his business on it, then send it back to the transitional machine, switch it to quantum, and be done.

There's a way to explain everything, guys, whether it's a stretch or not.



posted on Apr, 10 2004 @ 01:16 AM
640K ought to be enough for anybody!


[Edited on 4-10-2004 by EmbryonicEssence]


