
The Next Big Blue-Collar Job Is Coding


posted on Feb, 11 2017 @ 09:15 AM
a reply to: mOjOm


...Again. The problem isn't the technological changeover that is killing us. It's our not being prepared for the shift in what to do with how it affects society as a whole. We make these things so we don't have to work so much. They're doing exactly what we made them for. We just haven't figured out what to do with ourselves now that we aren't needed as laborers all the time. We're working with an old Human Design instead of an up-to-date one, like running old software in a modern business structure. We need an update on what Humanity's Purpose is in the modern age. Human Living 2.0, because Human Living 1.0 is too old to be of any use now.



Not sure how you define Human Purpose 1.0. But. Societies are ruled, and defined, by a few who hold the reins - and thus are individuals ruled and defined. "History" documents how most humans have always accepted their overlords' "guidance" (and limitations) in defining their 'purpose.' In tandem, various disciplines and messiahs have attempted to help people break their bonds. Absent early exposure to such disciplines or a 'messiah,' some come to wisdom in their later years. But entire cultures? Not so much.

There is a new movement to teach things like mindfulness, meditation and such in public schools. If the movement is not derailed (which it likely will be), then humanity might have a chance. Otherwise, no.




posted on Feb, 13 2017 @ 02:39 PM

originally posted by: Aazadan
While the computer is capable of learning through maximizing score-based feedback and random experimentation, it cannot write its own genetic algorithm out of nothing. Calling it intelligence is a bit of a misnomer, in my opinion. It's just rapidly iterating through a formula that creates a pattern. Eventually the pattern becomes optimal.


I work in R&D. This is a nice way of describing AI as it currently exists. There is no ability to combine random sets of items and come up with meaningful new ideas. A computer has no way of checking whether "Java" and "JavaScript" have some common denominator. Humans, on the other hand, can very quickly discover, after a little time spent comparing both languages, that they are completely different.

These are the sorts of challenges a computer will need to solve if it really wants to be at a level where it acts like a human. A strong AI needs the ability to "sense" meaning. The more you think about it, though, that's almost nonsensical. How do we "sense" or "intuit" meaning? What is it about our reasoning, in combination with past experiences, that allows us to know something as true or false and to figure out correct context? The labels and definitions aren't sufficient as ways to look for cross joins. There has to be some representational model that's completely different from the way we currently store data in a computer.

edit on 13-2-2017 by ThingsThatDontMakeSense because: (no reason given)



posted on Feb, 13 2017 @ 07:54 PM

originally posted by: ThingsThatDontMakeSense
I work in R&D. This is a nice way of describing AI as it currently exists. There is no ability to combine random sets of items and come up with meaningful new ideas. A computer has no way of checking whether "Java" and "JavaScript" have some common denominator. Humans, on the other hand, can very quickly discover, after a little time spent comparing both languages, that they are completely different.


I've been having to deal with way too much automata theory lately. I've got a semester-long project to design a language from scratch and then build a compiler for it. It's an interesting class. But anyway, since I'm taking it, I actually have to look at things like comparing Java to JavaScript. There are ways to do it, such as sending some test strings through a tokenizer, seeing whether it produces the same tokens, and walking down a parse tree. If the languages have something in common, they should parse out the same. You could probably compare parsing tables too.
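A toy version of that tokenizer comparison, just as a sketch: the two token specs below are made up for illustration (real Java and JavaScript grammars are vastly larger), but they show the idea of running the same test string through two lexers and comparing the token streams.

```python
import re

# Toy token specs for two made-up grammars (illustrative only; real
# Java/JavaScript grammars are far larger than this).
SPEC_A = [("KEYWORD", r"\b(?:int|class)\b"), ("NUMBER", r"\d+"),
          ("IDENT", r"[A-Za-z_]\w*"), ("OP", r"[=+;]"), ("SKIP", r"\s+")]
SPEC_B = [("KEYWORD", r"\b(?:var|function)\b"), ("NUMBER", r"\d+"),
          ("IDENT", r"[A-Za-z_]\w*"), ("OP", r"[=+;]"), ("SKIP", r"\s+")]

def tokenize(src, spec):
    # Build one alternation of named groups; return the token-type
    # stream, skipping whitespace.
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in spec)
    return [m.lastgroup for m in re.finditer(pattern, src)
            if m.lastgroup != "SKIP"]

# If two grammars tokenize the same test string into the same stream,
# they agree on at least that fragment.
print(tokenize("x = 42;", SPEC_A) == tokenize("x = 42;", SPEC_B))  # True
```

Agreement on a handful of test strings doesn't prove the grammars are alike, of course; it only shows a shared fragment, which is the point of the objection above.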

Either way, I get what you're saying. I just said something similar to a person in another thread.

Data is just a blob to a computer; it takes people to look at the data, filter it, and arrange it in such a way that an algorithm can go over it. In most cases, that person is also going to have to tell the computer what algorithm to use, because to the computer it's all meaningless data. It's only the people looking at the results who put any meaning on the numbers it spits out.



posted on Feb, 14 2017 @ 10:30 AM
a reply to: ThingsThatDontMakeSense


... How do we "sense" or "intuit" meaning? What is it about our reasoning, in combination with past experiences, that allows us to know something as true or false and to figure out correct context? The labels and definitions aren't sufficient as ways to look for cross joins. There has to be some representational model that's completely different from the way we currently store data in a computer.




Pretty sure it has to do with parallel processing, not just storage. I know of one project (partially funded by DARPA, btw) that developed a book-sized bio-supercomputer model that does parallel processing. They're now working towards a full-scale version. It uses nanotechnology, proteins and adenosine triphosphate (ATP).


Biological supercomputer uses the 'juice of life'

Using nanotechnology, proteins and a chemical that powers cells in everything from trees to people, researchers have built a biological supercomputer.



Scientists have managed to shrink a supercomputer to the size of a book using biological motors

…can solve mathematical problems as quickly as a supercomputer because it operates in parallel rather than in sequence.

Researchers from Lund University, Linnaeus University, University of California Berkeley, Dresden University of Technology, Max Planck Institute of Molecular Cell Biology and Genetics, the University of Liverpool, McGill University, Molecular Sense Ltd and Philips Innovation Services have used nanotechnology to create molecular motors that can perform several calculations simultaneously rather than sequentially. ...

...Their research, entitled “Parallel computation with molecular-motor-propelled agents in nanofabricated networks,” is published in the journal Proceedings of the National Academy of Sciences (PNAS).



The model “biocomputer,” which is roughly the size of a book, is powered by Adenosine triphosphate (ATP) — dubbed the “molecular unit of currency.”



ALSO SEE:

Parallel computation with molecular-motor-propelled agents in nanofabricated networks

Significance

Electronic computers are extremely powerful at performing a high number of operations at very high speeds, sequentially. However, they struggle with combinatorial tasks that can be solved faster if many operations are performed in parallel. Here, we present proof-of-concept of a parallel computer by solving the specific instance [2, 5, 9] of a classical nondeterministic-polynomial-time complete (“NP-complete”) problem, the subset sum problem. The computer consists of a specifically designed, nanostructured network explored by a large number of molecular-motor-driven, protein filaments. This system is highly energy efficient, thus avoiding the heating issues limiting electronic computers. We discuss the technical advances necessary to solve larger combinatorial problems than existing computation devices, potentially leading to a new way to tackle difficult mathematical problems.
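For scale, the [2, 5, 9] instance mentioned in the abstract is small enough to brute-force on any electronic computer. A sketch of what "solving" subset sum means, enumerating all 2^n subset sums (the combinatorial space the molecular network explores in parallel):

```python
from itertools import combinations

def subset_sums(values):
    # Enumerate every subset sum. The 2^n subsets are what makes the
    # problem NP-complete as n grows; here n = 3, so it's trivial.
    sums = set()
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.add(sum(combo))
    return sums

print(sorted(subset_sums([2, 5, 9])))  # [0, 2, 5, 7, 9, 11, 14, 16]
```

The biocomputer's advantage claimed in the paper isn't solving this tiny case; it's that the parallel exploration of all subsets scales without the energy cost of an electronic processor.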




posted on Feb, 14 2017 @ 04:38 PM

originally posted by: soficrow
Pretty sure it has to do with parallel processing, not just storage. I know of one project (partially funded by DARPA, btw) that developed a book-sized bio-supercomputer model that does parallel processing. They're now working towards a full-scale version. It uses nanotechnology, proteins and adenosine triphosphate (ATP).


Parallelization has mostly reached its limits. There are occasional breakthroughs on the software side that allow problems to be structured in more parallel ways, but on the hardware side there are diminishing returns. Fortunately, this is something that doesn't require going too far into hardware design to explain. It can be fully explained with just math. Each task, algorithm, and so on admits a different degree of parallelism. Breaking encryption scales linearly; you can simply continue to throw more processors at it. On the other hand, a very common algorithm known as binary search, which divides a search area in half, compares high/low, and repeats, is entirely sequential (this is relevant for something like big data, where you're searching high-volume disks). Most tasks are only X% parallelizable; I'm going to use 90% for my example. What this means is that even if the CPU is being as efficient as it can be, you're going to run into the following limit on how much of a performance gain more cores gets you:

(parallel / #CPUs) + (100 - parallel)

Or, with numbers attached:
(90 / #CPUs) + 10

So with 1 CPU your runtime is 100; with 2 CPUs it's 55; with 3 CPUs, 40; with 4 CPUs, 32.5; with 10 CPUs, 19; with 100 CPUs, 10.9; with 1000 CPUs, 10.09; and so on.

So the issue is that you run into diminishing returns. That second CPU cut your runtime nearly in half, but jumping from 10 to 100 CPUs didn't even halve it, and jumping from 100 to 1000 only gives you about an 8% improvement. All the while you're drastically increasing your energy use, which brings heat and battery issues.

The truth is, for general applications, going past about 50 CPUs just doesn't matter, and Intel is already building 72-core systems. We are rapidly approaching the limit of what more cores can do.
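The formula above is essentially Amdahl's law. A quick sketch, with the serial runtime normalized to 100 and the 90% figure taken from the example, reproduces those numbers:

```python
def runtime(n_cpus, parallel=90.0):
    # Amdahl's-law-style runtime: the parallel fraction shrinks as CPUs
    # are added, but the serial remainder (100 - parallel) never does.
    return parallel / n_cpus + (100.0 - parallel)

for n in (1, 2, 3, 4, 10, 100, 1000):
    print(n, runtime(n))  # 100.0, 55.0, 40.0, 32.5, 19.0, 10.9, 10.09
```

Note the floor: no matter how many cores you add, the runtime can never drop below the serial 10%.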

The real reason for these alternative types of processors is that we've hit another limit: that of silicon. Modern chips are so dense that shrinking features further to pack in more transistors no longer works; we've hit the miniaturization limit. We've run into other limits too, such as moving signals across a circuit board fast enough; electrical signals obey physical limits and simply can't propagate fast enough to give us faster CPUs. What all of this means is that we've just about hit the extremes of what silicon chips allow. Biological processors don't have these limits, which means we can potentially get faster processors out of them; how much faster, I'm not sure.



posted on Feb, 14 2017 @ 08:12 PM

originally posted by: Aazadan
I've been having to deal with way too much automata theory lately. I've got a semester-long project to design a language from scratch and then build a compiler for it. It's an interesting class. But anyway, since I'm taking it, I actually have to look at things like comparing Java to JavaScript. There are ways to do it, such as sending some test strings through a tokenizer, seeing whether it produces the same tokens, and walking down a parse tree. If the languages have something in common, they should parse out the same. You could probably compare parsing tables too.


I like how you are thinking through the steps to figure out how a comparison would work in practice. However, comparing grammar versus run-time efficiencies, and the ways interpreted languages differ from compiled languages, shows how a computer still has no easy way to self-reflect on its own structure and understand the fundamental differences between the two languages beyond parsing and tokenizing the Backus–Naur form. Imagine, for instance, a computer trying to figure out in which situations it would make more sense to use Java over JavaScript. =)

Every time we attempt to add in some logic to process a comparison, we do so by hand. There is no general-purpose routine for finding differences that can be solidified into some sort of logical object independent of human-assigned abstractions.


Either way, I get what you're saying. I just said something similar to a person in another thread.

Data is just a blob to a computer; it takes people to look at the data, filter it, and arrange it in such a way that an algorithm can go over it. In most cases, that person is also going to have to tell the computer what algorithm to use, because to the computer it's all meaningless data. It's only the people looking at the results who put any meaning on the numbers it spits out.


Exactly. Somehow the computational and logical components seem to go together to create a model of "truthful relationships" versus "illogical relationships" that eventually creates an object. The process of grouping and ungrouping, and the boundaries of where we stop, determines the logical abstraction.

To be more clear about what I mean, let's take the words "hip" and "hippopotamus." As humans we can immediately see there is no relationship. However, if I were to say "canine" versus "dog," we could easily see a dog is a canine (an is-a relationship resulting from inheritance), but not all canines are dogs. This means even if I write these two sentences:

1. Dogs are canines.
2. Dogs are not canines.

We as humans can still process both sentences and understand the meaning. The first sentence establishes the "is-a" relationship. The second is understood as a comment about the predicate: only the full category of canines is all canines, and therefore dogs can't represent the whole category. A better sentence would be "Dogs are not all canines," but our human brain can fill in the gap.

Establishing relationships, and knowing how to process the context independent of any explicit cue in order to judge a sentence true or false, is a large part of what makes us work as humans. Implementing this in software, though, is rather hard. We can easily create a tree structure and understand that canines have a parent relationship to dogs. However, how would we know how to process sentence 2?

"2. Dogs are not canines."

Either we accept it as false, since dogs are members of the canine family, or we accept it as true based on the assumption that the sentence means "Dogs do not equal canines."

In code this would be more explicit:

A. Canine.IsChild(dogs) = true
B. Canine.IsEqual(dogs) = false

The incredible ability of the human mind is that it doesn't require explicitness to understand the truth of a statement.
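The two explicit checks above can be made executable. A minimal sketch, using subclassing for the is-a relationship (the class names are just for illustration):

```python
class Canine:
    """The broader category."""

class Dog(Canine):
    """Sentence 1: 'Dogs are canines' -- an is-a relationship."""

# A. Membership: every Dog is a Canine.
print(issubclass(Dog, Canine))  # True

# B. Equality: the category 'dogs' is not the whole category 'canines'.
print(Dog == Canine)  # False
```

The computer happily evaluates both checks, but only because a human chose which check the ambiguous sentence was meant to express.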

edit on 14-2-2017 by ThingsThatDontMakeSense because: (no reason given)



posted on Feb, 15 2017 @ 06:47 AM
a reply to: Aazadan


Parallelization has mostly reached its limits.



Really?




...The real reason for these alternative types of processors is that we've hit another limit: that of silicon. ...we've just about hit the extremes of what silicon chips allow. Biological processors don't have these limits, which means we can potentially get faster processors out of them; how much faster, I'm not sure.



I suspect there are greater potentials in protein-based processing.



posted on Feb, 15 2017 @ 06:50 AM
a reply to: ThingsThatDontMakeSense

Now that we have established that we have one or two members here with the education and ability to survive the coming unemployment crisis, what about the 7 billion-odd who can't? Who won't?

We toss them out with the plastic?



posted on Feb, 15 2017 @ 07:06 AM
a reply to: seasonal

also, plumbers, carpenters, sheetrock hangers, roofers, service technicians, electricians, steel workers; there is a whole huge list. sure, maybe one day they will be able to build a machine that can do some of it, but they will never be able to build machines that can do it all.




edit on 15-2-2017 by hounddoghowlie because: (no reason given)



posted on Feb, 15 2017 @ 07:21 AM

originally posted by: soficrow
Really?


There's still the potential to add more cores to a CPU, but you hit diminishing returns; go back to that math formula. The average home system these days is a dual core, many have quad cores, and some have octa-cores. Once you get to about 50, for business/home use there's no point in going further, because at that point your bottleneck is almost entirely the part that has to remain serial. Certain applications can be parallelized almost without limit; breaking encryption is one of these, and 3D rendering is another. But the tasks that are fully parallelizable are the minority. For most stuff, you reach a point where going on to the next stage of computation requires the previous stage to be complete, and that's where the bottlenecks start to occur.

Using my previous example again, searching for data (usually in memory) is like this. Let's say you have an ordered list of numbers (1, 10, 50, 100, 500), which is a very common problem, and you're looking to find whether the number 75 is in there. One common way to do this is by taking the midpoint of the set, which is index 2 at this point (value 50), comparing that value to your target (50 to 75), and seeing that the target is higher. So you'll then take the midpoint between index 2 and 4 (50 and 500), which is index 3, a value of 100. That's greater than 75, so the only region left is between index 2 and index 3, which is empty, and you'll see the value is not found. It's basically the efficient guess-and-check approach people are taught as kids.

What you'll see, though, is that more processors don't speed this task up, because each iteration relies on the result of the previous one. So if a program is waiting on finding data, that wait can't be done in parallel. Waiting on data lookups is rather common when using database applications.

In short, we can already build machines that hit the point where more cores don't really mean more value. Within 10 years they'll probably be available in the home. Even with tasks that do scale with the number of processors, going from 1 to 2 processors doubles your computation speed, while going from 100 to 101 is only a 1% boost.
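The binary search walked through above can be sketched in a few lines; the point is that each comparison decides where the next one looks, so the loop cannot be split across cores no matter how many are available.

```python
def binary_search(values, target):
    """Search a sorted list; return (index, comparisons) or (-1, comparisons).
    Each iteration depends on the previous one's result -- the loop is
    inherently sequential."""
    lo, hi = 0, len(values) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if values[mid] == target:
            return mid, steps
        if values[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1, steps

# The example from the post: 75 isn't in the list, found out in 2 comparisons.
print(binary_search([1, 10, 50, 100, 500], 75))  # (-1, 2)
```

Binary search takes only about log2(n) comparisons, so it's fast; it just isn't parallelizable.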


originally posted by: soficrow
Now that we have established that we have one or two members here with the education and ability to survive the coming unemployment crisis, what about the 7 billion-odd who can't? Who won't?

We toss them out with the plastic?


I don't see why people have to be employed. It's good psychologically for people to feel they're doing something useful, but that doesn't necessarily mean they have to work. Producing in other ways, like art, literature, or music, has value too. For those who don't like to do that stuff there are other constructive things to do, like local farming, hunting, or even just being good conversation.

The real issue I see in a transition to an economy where not everyone is working is that we've basically criminalized homelessness. I don't want to see a society where people have to turn to crime for support, because that means that in the end we'll throw people in jail if they don't have jobs. Some towns are starting to do this already. A UBI fixes that, but we're at least 10 years out from the government actually talking about a UBI.



posted on Feb, 15 2017 @ 07:21 AM
a reply to: hounddoghowlie

Already happening. Not so much in North America yet, but elsewhere for sure.


Recent Automation in Construction Articles

Recently published articles from Automation in Construction




posted on Feb, 15 2017 @ 07:22 AM

originally posted by: hounddoghowlie
also, plumbers, carpenters, sheetrock hangers, roofers, service technicians, electricians, steel workers; there is a whole huge list. sure, maybe one day they will be able to build a machine that can do some of it, but they will never be able to build machines that can do it all.


They don't get credit for it, but plumbing is the single most important job in society today. Plumbers won't be going anywhere in the next 100 years. The job might change a bit, but it will continue to exist.



posted on Feb, 15 2017 @ 07:26 AM
a reply to: Aazadan


I don't see why people have to be employed. It's good psychologically for people to feel they're doing something useful, but that doesn't necessarily mean they have to work. Producing in other ways, like art, literature, or music, has value too. For those who don't like to do that stuff there are other constructive things to do, like local farming, hunting, or even just being good conversation.

The real issue I see in a transition to an economy where not everyone is working is that we've basically criminalized homelessness. I don't want to see a society where people have to turn to crime for support, because that means that in the end we'll throw people in jail if they don't have jobs. Some towns are starting to do this already. A UBI fixes that, but we're at least 10 years out from the government actually talking about a UBI.



We agree on much. But methinks even the US government has been at least talking about a UBI, and certainly most other governments have been, at least in the developed world.



posted on Feb, 15 2017 @ 07:37 AM
a reply to: soficrow

from what i looked at, those are all monitoring and design machines that aren't even in production.
this is from the very first one:


Accurate service-life prediction of structures is vital for taking appropriate measures in a time- and cost-effective manner. However, the conventional prediction models rely on simplified assumptions, leading to inaccurate estimations. The paper reviews the capability of machine learning in addressing the limitations of classical prediction models. This is due to its ability to capture the complex physical and chemical process of the deterioration mechanism. The paper also presents previous researches that proposed the applicability of machine learning in assisting durability assessment of reinforced concrete structures. The advantages of employing machine learning for durability and service-life assessment of reinforced concrete structures are also discussed in detail. The growing trend of collecting more and more in-service data using wireless sensors facilitates the use of machine learning for durability and service-life assessment. The paper concludes by recommending the future directions based on examination of recent advances and current practices in this specific area.

and every one of those machines would need people to run them.

show me one where the machine can pull pipe through the studs, cut, fit, solder/glue, and do everything else that is involved in new plumbing, or even better yet go into someone's home and repair it on its own. then i'll worry. i'm not saying it's not gonna happen, it may very well be when the earth becomes the utopia that many want, but it's not going to happen anytime in the near future.


edit on 15-2-2017 by hounddoghowlie because: (no reason given)



posted on Feb, 15 2017 @ 07:48 AM
a reply to: Aazadan

that is the truth; most people couldn't live without them. well, there are some places where it's just fine to take a dump in the middle of the street, but elsewhere plumbers are king.



posted on Feb, 15 2017 @ 11:08 AM

originally posted by: soficrow
I suspect there are greater potentials in protein-based processing.


There are, though I don't know how much. One of my professors is a combined hardware/software engineer; he's one of those people who knows how to do literally everything. I've had a couple of discussions with him over the years about how DNA-based processing works and the future of hardware. I understand the basics of DNA computing (and I do mean the very basics), but not enough to understand the speed differences. For what it's worth, his opinion was that we'll have biological computers on the market in ~15 years. At the same time, though, he thinks they will be fairly short-lived, because we'll have quantum computers for government and enterprise-level corporations in 25, and since that's the level where faster processors are still actually useful (the needs of individual CPUs are declining, and as I said before, we've about hit a wall on computer speeds), the biological stuff won't really have time to take hold.

I don't know if that prediction is right or wrong, but the guy who made it is way smarter than me and wrote his thesis on DNA computing, so he knows his stuff, which makes me inclined to believe it.

That's assuming, of course, that we can actually rewrite a bunch of the problems computers currently solve in a way that quantum computers can handle. If it turns out we can't, there's a bigger ceiling for biological ones.



posted on Feb, 15 2017 @ 03:36 PM

originally posted by: soficrow
a reply to: ThingsThatDontMakeSense

Now that we have established that we have one or two members here with the education and ability to survive the coming unemployment crisis, what about the 7 billion-odd who can't? Who won't?

We toss them out with the plastic?


When people are made to feel useless they fight back. Social systems are malleable.

The idea of a universal basic income probably won't work, though. It sounds nice, but for people who still have jobs and have to work, it will catalyze a class rift, for the same reason most mid-to-upper-income Americans are furious about socialistic programs that take resources from those who toil and distribute them to those who live on welfare programs.

It is not tenable without some form of reciprocity. Perhaps that's the key?

This isn't a Republican or a Democrat thing either. I was just listening to Colbert talk with Black Lives Matter leader DeRay Mckesson. www.youtube.com... Colbert was just as adamant about protecting what he has earned as the next Joe.

To make matters worse, we know what happens when governments try to run centrally planned economies. The USSR tried it and it was a complete failure. Soviet Russia's Gosplan directly led to their economic downfall.

fee.org...

Money for the sake of money just devalues the currency if no one is actually doing anything. So honestly I just don't see a UBI playing out well for a number of reasons.

On the other hand, there are a lot of innovations in financial technology right now. Our best hope is that someone creates a new exchange system that allows people to work among themselves in their own communities without government involvement.

Some search keywords: blockchain local exchange bitcoin

Necessity is the mother of invention, as they say.

Something has to give though, you are absolutely right about that.

edit on 15-2-2017 by ThingsThatDontMakeSense because: (no reason given)



posted on Feb, 15 2017 @ 06:18 PM
a reply to: hounddoghowlie


...show me one where the machine can pull pipe through the studs, cut, fit, solder/glue, and do everything else that is involved in new plumbing, or even better yet go into someone's home and repair it on its own. then i'll worry. i'm not saying it's not gonna happen, it may very well be when the earth becomes the utopia that many want, but it's not going to happen anytime in the near future.



I agree the trades are safer than most other jobs, but even the construction industry is up for re-conceptualization. Recent projects have built homes using 3D printing, with the plumbing and electrical printed into the wall panels. ...I have a file on cool projects but can't grab it now. Here's the wiki entry.



Around 2000, Khoshnevis's team at USC Viterbi began to focus on construction-scale 3D printing of cementitious and ceramic pastes, encompassing and exploring automated integration of modular reinforcement, built-in plumbing and electrical services, within one continuous build process. This technology has only been tested at lab scale to date and controversially and allegedly formed the basis for recent efforts in China.




