
The Filesharing Conspiracy


posted on Jul, 28 2010 @ 03:16 PM
And in furtherance of my above post,
I will say that this voice-replace test will
INITIALLY yield a LIKELY weak result
BUT WILL SHOW significant improvement
as we refine the initial logic tree and
CONTINUE to run the test over & over again
over days and weeks on higher-performance
hardware, further refining the logic-tree
rules with each iteration.

This means the tests will become EVER MORE
sophisticated and professional in the final result
as the experience base builds up over time.

The BIG THING is that our neural net is MUCH FASTER
than ANY HUMAN at searching through and filtering
initial data to create new "Rules of Thumb" for its
logic tree database, so what takes an audio engineer
say 10 to 15 years to master, Midgrid will take less than
a YEAR to get to that master level of audio editing or
mixing expertise IF the initial rules are finely tuned
and expertly refined FROM THE OUTSET.

This Raises the Question of What Happens if the Neural Net
becomes DAMAGED from "Bad Initial Input"?

My answer is to Abort its Existence as an Abomination
and then Start Over with GOOD INITIAL Rules of Thumb!
Neural Nets have NO EMOTIONS (YET!) so destroying them
won't be such a bad thing on a moral basis if they're
"Damaged or Bad" ... I only see a problem in about
50 years when we put INDUSTRIAL STRENGTH neural net
software into Multi-thousand petaflop robotic hardware
that looks like US (i.e. Androids) which WILL begin to
emulate sentience and emotions to such a degree
that THEY would be essentially HUMAN in thought and action!
ONLY THEN would I have a problem with DESTROYING a neural
net for any specific reason!

Until that day, if the Neural Net gives you garbage output,
KILL IT DEAD and START OVER!

Hopefully YOU will still be SMART, SKILLED AND DILIGENT ENOUGH
to RECOGNIZE WHAT IS GARBAGE OUTPUT FROM a damaged
Neural Net!




posted on Jul, 28 2010 @ 03:37 PM
There is no way I would ever download books. When or if anything happened
to our power grid, or any doomsday you can think of (I'm not saying there will be one, just if), I'll be off in the mountains reading my book collection while others wish they had something more tangible than info stuck in things that no longer work.



posted on Jul, 29 2010 @ 09:28 AM

Originally posted by StargateSG7
And in furtherance to my above post,
I will say that this voice replace test will
INITIALLY be of a LIKELY weak final result...

[...]

Hopefully YOU will still be SMART, SKILLED AND DILIGENT ENOUGH
to RECOGNIZE WHAT IS GARBAGE OUTPUT FROM a damaged
Neural Net!





I don't know. I'm starting to grow pretty skeptical as this grows more and more into a spiel for your company. As it stands, your money hasn't been put where your mouth is even for the basic example that we set out aeons and many pages ago, never mind to the level of replacing someone with 15 years expertise. And I'm getting tired of it and don't buy it.

Now you say the results will likely be weak. I'm sorry, but that won't be good enough at all and it will be a fail for your grid. If you will need iterations then it's an utter fail. The experienced studio dude has rounded up the musicians, found the singer, recorded a very passable copy and mixed and produced it in a day and a half, shall we say. At the moment, this software looks to be at the same stage that a lot of neural net stuff I've looked at has been: totally uncommercial and unsaleable. Sure, in a few years' time of tweaking, the initial parameters MIGHT (huge MIGHT) be strong enough to perform one task as well as an expert. But experts also innovate and create. They have the knowledge to re-write the book. They should always be finding new and better ways to do the same thing. Will the grid have the same level of flexibility? I'm starting to doubt it.

Who will decide if a neural net has bad input or has become damaged? I think you've just made your own argument for the replacement of human experts, "putting them out of a job", with neural nets untenable. If there is the possibility of needing even one expert in the future for a hypothetical case of a neural net being damaged, if there is even the slightest risk of it, then we will still need experts. Argument over, I'm afraid.

And where are the neural net companies I knew? Finished. They just had far too long a lead time towards a commercial product. It would be 10 years, and that's just their claim, before they'd be at the level of a human expert (and that's doubtful) and by that time, the knowledge of the human experts would have moved on 100 times further. Businesses just can't wait that long.

Another point - as long as a human is refining the logic tree and the initial parameters, there will be errors. A computer science lecturer once told me, "computers are stupid - basically you have to tell them everything they have to do". Even in the case of this neural net, at the most basic level, you are still telling it what to do. It will never be a truly autonomous learner as one neural net will never be able to learn the field of another neural net without changes in its logic trees. Correct? You need to start over with it.

[edit on 29-7-2010 by J.Clear]



posted on Jul, 29 2010 @ 09:34 AM
I should reiterate - the type of weak result you are saying is most likely now is exactly the type of result I've seen in the past from similar neural-net-style programs. And that is why they are now out of business. They simply could not go commercial because the result was poor. They ran out of venture capital and that was them done, as no one would buy the product. And that is what will happen to your company too without something better than the promise that, after many time-consuming iterations on high-powered expensive technology, it will eventually learn enough to be passable.



posted on Jul, 29 2010 @ 05:40 PM
---
Who will decide if a neural net has bad input or has become damaged? I think you've just made your own argument for the replacement of human experts, "putting them out of a job", with neural nets untenable. If there is the possibility of needing even one expert in the future for a hypothetical case of a neural net being damaged, if there is even the slightest risk of it, then we will still need experts. Argument over, I'm afraid.
---

This is a BASIC PROBLEM with ANY learning system, be it a human
OR a computer...HOW does he/she/it get to a specific level of
expertise? ... SOMEONE (a subject matter expert!) MUST create
(as a parent-like entity) and INPUT a set of initial rules
which are then followed and further refined, with diligent
guidance or on an autonomous basis, for a set period of time.

Neural Nets NEED to be bootstrapped (started from scratch)
and then allowed to gather data and use filters to refine their
"Weighting Factors" which assign a level of importance to
specific if-then-else logic which allows "it" to grade which
rules are the most important ones to follow during any large
scale decision-making process.

We as humans make many thousands of micro-decisions which
create a series of weighting scales that assign a level of importance
to each rule-of-thumb which is then used to make a major
YES/NO/MAYBE macro-decision.

This CAN BE EMULATED in computers using multi-state
boolean logic but SOMEONE MUST start and input the initial
rules of thumb...and those micro-rules NEED to be setup
by a SUBJECT MATTER EXPERT if there is ANY HOPE of making
a marketable and/or peer-level Neural Net.

Regarding this test, you are indeed CORRECT that this test
is taking way too long and would be untenable in a commercial
environment....BUT....I must add this is an INITIAL BOOTSTRAP
of a Neural Net that I started myself using general rules of
audio editing and mixing...I'm decent...but I'm not Bob Ludwig!
Plus I should note that I did make the mistake of using brute-force
DSP filtering algorithms (i.e. high-pass & low-pass + notch filters)
and frequency rise-time and falloff tracing (aka audio raytracing).
These are computationally EXPENSIVE tasks which could be
eliminated the next time around for a significant boost in
processing speed at the expense of some accuracy.
That said --- I'm still letting the test continue so as to
complete the "Scientific Method" cycle.
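For readers unfamiliar with the brute-force DSP filtering mentioned above, here is a minimal one-pole low-pass filter sketched in Python (the thread itself contains no code, so the language, function name, and parameters are all my own illustration, not the poster's actual code):

```python
import math

def low_pass(samples, sample_rate, cutoff_hz):
    """One-pole (RC-style) low-pass filter over a list of samples.

    Illustrative only: real audio chains combine high-pass, low-pass,
    and notch stages, which is why the poster calls them expensive.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)            # smoothing factor in (0, 1)
    out, prev = [], 0.0
    for s in samples:
        prev += alpha * (s - prev)    # exponential moving average
        out.append(prev)
    return out
```

Run on a 10 kHz tone with a 500 Hz cutoff, the output amplitude drops to a few percent of the input, while a 100 Hz tone passes through nearly unchanged.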

You can't learn unless you make mistakes!

The key is to PLEASE allow this to finish...even I DON'T KNOW what
will happen as the final result....it could be terrible...it could be
GREAT! I don't know until it's finished which is currently at
238 hours aggregate time (able to boost using some free
computers!)

My only mistake is that I should have STARTED AND FINISHED
the test BEFORE opening my big fat mouth on ATS!
I was over-confident in the speed of this neural net,
so Mea Culpa on that one....What I WON'T SAY is Mea Culpa
on its ABILITY to obtain a Professional Result, which is STILL
up for debate until it's complete!

And YES my spiel HAS turned a bit marketing-oriented
but that's because I'm EXCITED by the prospect of a whole
new easy-to-use consumer-level NEURAL NET infrastructure
that can be modified by ANYONE for ANY desired task!

What don't you like about THAT IDEA?

Anyways, NO WORRY about venture capital...right now
money is NOT an issue because I am MORE THAN WILLING
to continue until the ends of the Earth to finish this program!

Again let's keep an open mind...even I'm getting annoyed with
how long it's taking....but still...I'm patient....waiting to pounce
when all is quiet...stealthily...with fangs ready to bite through
all the hyperbole and/or disbelief!



posted on Jul, 29 2010 @ 06:43 PM
---
"Even in the case of this neural net, at the most basic level, you are still telling it what to do. It will never be a truly autonomous learner as one neural net will never be able to learn the field of another neural net without changes in its logic trees. Correct? You need to start over with it. "
---

Take a look in the mirror and tell me that someone OR something
aborted your own ability to obtain knowledge from another field
of study or play and integrate that knowledge-base or skillset
into your current personal or work life!

YOU ARE A NEURAL NET!!!...with billions of neurons and
quadrillions of synapses that use micro-voltage "logic gates" to FILTER and WEIGHT
thousands of micro-decisions that are then grouped together
into larger Macro-Decision trees which will output a final
YES/NO/MAYBE decision.

What above tasks do you think CANNOT BE EMULATED
on a computer? Each and EVERY single one HAS, in some
form or another, already been put into many modern
computing systems but ONLY NOW are these rule-sets
being aggregated into a holistic and MONOLITHIC infrastructure
that WILL ALLOW the merging of disparate logic trees into
LARGER GROUPS of logic trees that CAN THEN BE APPLIED to
specific or general reasoning and decision-making tasks.

Even a damaged Neural Net can be fixed by either overwriting
a specific series of rules with newer versions OR just REPLACING
the ENTIRE LOGIC TREE with a peer-reviewed one...What's so
hard about that?...It just costs a bit of money to gather all the
needed subject matter experts together to decide on and input
all the Rules of Thumb into the If-Then-Else decision trees that
are used to process bits of data.

I need to make something clear in that a task-specific Neural Net
is simply a set of rules having the following structure:

If the incoming data is not recognized
or does not follow a peer-reviewed valid format Then
    Fix the incoming data if we can, OR Set Decision Result to
    CANNOT PROCESS INCOMING DATA
Else If This Data is Within These Range Limits Then
    Set Decision Result to YES
Else
    Set Decision Result to NO

One creates THOUSANDS or even MILLIONS of these rules
and gives them a percentage-based weight of importance
as to how much each micro-decision affects the overall result.

Each micro-decision is then grouped together to form a series of
macro-decisions, which are then grouped together to give a
final YES/NO/MAYBE result.
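The weighted micro-decision scheme described above can be sketched as a tiny rule evaluator. Python is used for concreteness; the function, cutoffs, and example rules below are invented for illustration and are not taken from the poster's system:

```python
def decide(rules, data, yes_cutoff=0.7, no_cutoff=0.3):
    """Aggregate weighted micro-decisions into a YES/NO/MAYBE macro-decision.

    `rules` is a list of (weight, predicate) pairs. A predicate returns
    True, False, or None (None = this rule cannot process the datum,
    mirroring the CANNOT PROCESS branch above).
    """
    total = scored = 0.0
    for weight, predicate in rules:
        result = predicate(data)
        if result is None:                 # unrecognized input: skip rule
            continue
        total += weight
        if result:
            scored += weight
    if total == 0:
        return "CANNOT PROCESS"
    fraction = scored / total              # weighted share of YES votes
    if fraction >= yes_cutoff:
        return "YES"
    if fraction <= no_cutoff:
        return "NO"
    return "MAYBE"

# Hypothetical example rules (weights invented for illustration):
EXAMPLE_RULES = [
    (0.5, lambda d: d.get("red_cluster")),
    (0.3, lambda d: d.get("octagon")),
    (0.2, lambda d: d.get("text") == "STOP"),
]
```

With these weights, an input satisfying every rule yields "YES", one satisfying none yields "NO", and a half-weight match lands in "MAYBE".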

An example I like to use is as follows:

1) What is a Stop Sign?

Rules of Thumb:
a) Is there a series of RGB pixels coloured MEDIUM RED
within or near the upper right corner of an incoming image?

b) Are the outer edges of that series of pixels organized
into a shape that is an Octagon?

c) Is there a series of white pixels found within the borders
of the series of found red pixels?

d) Do those white pixels form a shape that is considered text?

e) Does that text say STOP?


For each of the above micro-decisions there is attached
a series of Digital Signal Processing (DSP) functions that
are common to computer vision recognition such as edge
detection, Optical Character Shape Recognition, colour
or hue filters, pixel position determination within a bitmap
and a few other simple-to-implement but NECESSARY DSP functions.

Your own visual cortex does this DSP processing MILLIONS of
times a day AND it does it FAST by way of multi-network node
processing using a biologically-based Array Processor!
Your visual cortex breaks up the DSP functions into small
segments that are executed by MANY sets of grouped-together
neurons that give small-sized end-result answers which
are further aggregated and weighted into larger data sets.

If the results of the DSP functions are simple YES or NO answers
we can GROUP together and WEIGHT the importance of each
micro-decision within the overall macro-decision of answering
the question of:

Can the edge detected object in or near the upper right corner
of an incoming image be considered a STOP sign?

If the MAJORITY of the micro-decisions is weighted
and averaged out to be YES, then we can digitally
sign off on a new fact which is YES the portion of the
screen that is red and octagonal in shape containing
the words STOP is indeed a stop sign.
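Rules (a) through (e) and the weighted majority vote can be rendered as a toy checker. The feature names, integer weights, and threshold below are hypothetical, and the "DSP functions" are stubbed out as lookups on a dictionary of pre-extracted image features (real edge detection and OCR are far more involved):

```python
# Toy rendering of micro-decisions (a)-(e). Integer percentage weights
# avoid floating-point edge cases at the threshold. All values are
# invented for illustration, not taken from any real vision system.
STOP_SIGN_RULES = [
    (20, "red pixel cluster present",    lambda f: f["has_red_cluster"]),
    (35, "cluster outline is octagonal", lambda f: f["edge_count"] == 8),
    (15, "white pixels inside border",   lambda f: f["has_white_inner"]),
    (15, "white pixels form text",       lambda f: f["inner_is_text"]),
    (15, "text reads STOP",              lambda f: f["text"] == "STOP"),
]

def looks_like_stop_sign(features, threshold=50):
    """True when the weighted YES votes strictly exceed the threshold."""
    score = sum(w for w, _desc, check in STOP_SIGN_RULES if check(features))
    return score > threshold
```

An octagonal red sign reading STOP scores 100 and passes; a triangular red YIELD sign scores only 50 and is rejected, since it fails the two most heavily weighted shape and text rules.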

An autonomously driven vehicle could then use that information
to start the braking process so as to bring a car to a stop
just before the newly found STOP sign.

A NEW SERIES of micro-decisions can then be started to allow
the vehicle to continue on its way IF it is found that no pedestrians
are present in the middle of the road and that no other objects are
in the way AND if we find NO OTHER signs that say NOT to proceed.

Combine MANY micro-rules and then combine those into larger
logic trees to allow MANY variable tasks to be input to that
logic tree which can then be further grouped to allow things
such as autonomously flying a plane or driving a car or
even composing music! .... the sky's the limit on what
Neural Nets can do!

---

SO NOW CAN YOU SEE how simple computer rules can allow
complex tasks to be completed by a MACHINE?

There is NO TECHNICAL OBSTACLE for creating and inputting
a series of peer-reviewed rules-of-thumb into a grid-based
network of single or multi-core computers or CPU/GPU/DSP
chips and having those rules-of-thumb initially manually
weighted in terms of importance by that same team of experts
to form a large scale macro-decision-making logic tree
to allow the computerization of human-level decision
making tasks.

There is also NO TECHNICAL OBSTACLE to allowing and inputting
a second set of rules-of-thumb that allows a computer to filter,
add to or take away from the original set of task-specific rules
if those original rules keep failing when applied to known
or new sets of input data. This allows AUTONOMOUS upgrading
of a Neural Net on a self-modifying basis.

And finally there is NO TECHNICAL OBSTACLE to combining
MANY logic trees to perform ever more complex tasks
and ever-greater refining of the original task-specific
logic trees...after a period of time those logic trees
will become so precise and input data-specific that
the total aggregate skill level of the Neural Net will
be EQUAL TO or SURPASS human abilities!!!!!!


[edit on 2010/7/29 by StargateSG7]



posted on Jul, 29 2010 @ 07:47 PM
Ok 2c worth here we go..

When my DVD that cost me $40 soon stops working from overuse - am I entitled to another free copy? Apparently not - so am I buying the DVD or the movie?
When I go to the big screen am I paying for the big screen or movie?
Most movie downloads are often not in good quality.
Many music CDs contain a list of tracks people have to collectively buy when they often only want the one or two good songs on the CD.

People are struggling to survive, let alone pay $10-$20 just to see one movie. Those who can, pay; those who can't, download; and many do both.

The bottom line is people are tired of paying too much, and there is one rule when dealing with customers that obviously escapes these industries: THE CUSTOMER IS ALWAYS RIGHT.

What the record companies fail to mention is the amount of money they spend on advertising to get people talking, which could be saved by embracing this technology. When someone downloads a movie they like, 2 positive things happen: firstly, the person will tell other people that it is a good movie - this is free advertising; secondly, they will want to watch it again in the future, and next time they may want to see it on a big screen or in high quality.

I think the record companies etc need to get realistic. Stop fighting the file sharers and do what they do best - take advantage of the situation.
Stop spending so much on advertising and upload free low-quality copies of their own movies; if people want high quality they will go to the big screen and pay for it or download the DVD, and that will not change.

People will always pay for a quality product. The industry just has to make it worth our time; it is upset that the bar has been raised.



posted on Jul, 30 2010 @ 03:06 AM

Originally posted by byteshertz
Many music CDs contain a list of tracks people have to collectively buy when they often only want the one or two good songs on the CD.


I know of at least one shop that offered its customers the option to make their own CDs by choosing single tracks from various CDs. The shop got shut down by the industry's lawyers. THAT'S how dumb the industry is.

[edit on 30-7-2010 by Skyfloating]



posted on Jul, 30 2010 @ 09:05 AM

Originally posted by StargateSG7
And YES my spiel HAS turned a bit marketing oriented
but that's because i'm EXCITED by the prospect of a whole
new easy-to-use consumer-level NEURAL NET infrastructure
that can be modified by ANYONE for ANY desired task!

What don't you like about THAT IDEA?


No one has a problem with that idea. As an engineer, and like many other engineers, I find the concept of neural nets really exciting. BUT that's nothing new. What I'm saying is that I know lots of engineers excited by it, and lots of engineers and scientists who are working on it, but so far the theory is far more exciting than the practical results, and there's nothing I've seen to suggest the practical results will be exciting any time in the near or middle future. I honestly think there's some yet-to-be-seen leap that will need to be made before this technology does the job you hope it will. I can understand the excitement you have about it, because I know so many people who are equally enthralled and fervent about it, but that fervor can blind you sometimes to how the layman sees this. And that is by looking at his watch while you reassure him once again that it will only take a few minutes more and will probably possibly maybe sound ok.

[edit on 30-7-2010 by J.Clear]



posted on Jul, 30 2010 @ 09:22 AM

Originally posted by StargateSG7
YOU ARE A NEURAL NET!!!



Abstraction and inference. Humans can infer answers where there are no clear answers and create new rules where there were none before. Actually, you didn't answer that part of my last post at all, about creativity. About how experts not only know the precise rules and what is good/bad, right/wrong, but have the ability to innovate and create anew, modify rules and write new ones. Experts not only replicate the quality of what has gone before (as you intend your neural net to, and thus replace them); they also create new standards of quality and new fields of excellence, new realms of science and art. They rewrite the book. They bend the rules. Will your neural net break new ground?

The neural net model is extremely attractive to apply to human intelligence; it's a holy grail of sorts because it appears to show us how the human mind makes decisions at a basic level. But I don't buy it. If it were that simple, throwing roomfuls of equipment at it would eventually emulate it. But we do it inside a mass of flesh inside our heads. There has to be more to it than billions of rules. Whether it's self-organising systems or something else, I don't know. But I think there's something we don't see yet that is a missing element in understanding human intelligence, and I don't think the current neural net model will ever achieve it.

I'm so tired of the speculating you do in every reply here. You're saying the same thing over and over again in a religious fervor. Enough already. It's pure and utter speculation with absolutely no proof. And as you can see, I'm the only one still replying to you on it. Everyone else grew tired of it ages ago.

[edit on 30-7-2010 by J.Clear]



posted on Jul, 30 2010 @ 09:56 AM
By the way, what's the policy on advertising here?

Though to be honest, these replies are almost anti-advertising, you'd be losing customers from them..



posted on Aug, 2 2010 @ 05:48 AM

Originally posted by J.Clear
By the way, what's the policy on advertising here?

Though to be honest, these replies are almost anti-advertising, you'd be losing customers from them..



---
It's not supposed to be advertising...just blowing my own horn
a bit, so to speak...I'm excited by the possibilities of Neural Net processing
that has been simplified and extended into the consumer realm,
so I'm not sure what the problem is with my excited statements.

All I can say is that the proof will be in the pudding...eventually!

Even if most ignore my diatribes....one or two will probably have
an interest in my ideas....Normally I would have asked to have
my replies moved to another SEPARATE thread but for
Thread Location purposes it SHOULD stay here for now and
then be moved to its own thread...I say give it 3 days or so
and then have the moderators move it to a separate area
named "Autonomous Multimedia Content Creation: Can it be Done?".

No other comment to be made except an apology for the length
of time it is taking...The technical name for the method I'm using to parse
a 3D audio sample environment is called Quadtree Sub-division which
is related to the type of subsampling iteratively computed by
Fractal Compression algorithms of still images but applied to
a 3D map of 16-bit audio samples which were calculated to
have a specific position within that 3D environment based upon
changes in echo delays, frequency changes and even changes
in voice timbre to allow me to ESTIMATE the original size and
even shape of the original acoustic environment at a resolution
down to even centimeter scales.
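For reference, classic quadtree subdivision over a 2D grid looks roughly like the sketch below (Python, illustrative only; the poster's 3D audio variant over time, frequency, and position is not shown here, so only the generic recursive split is sketched). The grid is assumed square with a power-of-two side:

```python
def subdivide(grid, x, y, size, tolerance, leaves):
    """Recursively split a square region until its values are uniform
    to within `tolerance`, collecting (x, y, size, mean) leaf nodes."""
    cells = [grid[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(cells) / len(cells)
    if size == 1 or max(abs(c - mean) for c in cells) <= tolerance:
        leaves.append((x, y, size, mean))
        return
    half = size // 2
    for dy in (0, half):                  # split into four quadrants
        for dx in (0, half):
            subdivide(grid, x + dx, y + dy, half, tolerance, leaves)

def quadtree(grid, tolerance=0.0):
    """Subdivide a square power-of-two grid; return its leaf regions."""
    leaves = []
    subdivide(grid, 0, 0, len(grid), tolerance, leaves)
    return leaves
```

A uniform region collapses to a single leaf, while a varied region keeps splitting; raising the tolerance trades resolution for fewer leaves, which is the same speed/accuracy trade-off the post describes.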

Since many modern recordings have reverb and echo added to
simulate a specific acoustic environment I can either calculate
the size of the simulated environment OR take away the reverb
sound effects by TRYING to estimate what type of effect was
applied and removing them using DSP functions to obtain the
size or dimensions of the ORIGINAL acoustic environment
so that we can do the Quadtree processing that allows
voices OR instruments to be removed, replaced or transposed
in a more naturalistic manner than what has been done before.

In fact, what my test SHOULD SOUND LIKE is that of an engineer
who simply re-recorded another voice over pre-recorded
background instrumentals. Many of you will say: what's the
big deal? Any audio engineer can do that in a few hours.

In this case....it is being done AUTONOMOUSLY....without me
interfering OR doing any work! This will have an effect upon
the current financial structure of the music arts IF multimedia
production can be done AUTONOMOUSLY and QUICKLY!

While the quickly part has not panned out as I had hoped,
I'm not letting that stop me from finishing the test as promised.
I personally cannot accurately predict the number of 3D "Audio-pixel"
blocks that will be computed by the audio ray trace parser (i.e. quadtree
sub-sampler), so I made an average guesstimate of what would
happen during the 44,100 3D cubic environment samples PER SECOND
(i.e. 44,100 x X x Y x Z axes of 16-bit audio samples divided into
separated-out frequencies) on a grid-style network of mid-grade
personal computers....That estimate was wrong...and for that
I apologize! BUT...that shouldn't invalidate my idea that the
Scientific Method should be allowed to run its course.

Whatever those results are, I can still use them to dissect what happened
and change the code to better parse the 3D data.

...again for the moderators...if we need to move this to a separate
thread just do it and U2U me with a new link.

SORRY, I don't want to waste anyone's time or use up space within
a specific unrelated thread...it just came up out of earlier comments
I made many posts ago....so I decided to run a test as to whether
computers can do autonomous multimedia production at such a
quality level that "illegal file sharing" would be rendered moot
or irrelevant in a modern high-tech society full of software
that could create its own content at a click of a mouse button.



posted on Aug, 2 2010 @ 07:47 AM

Originally posted by StargateSG7


---
It's not supposed to be advertising...just a bit blowing my own horn
so to speak...


Or your company's horn. And that's different to advertising how?



All I can say is that the proof will be in the pudding...eventually!


Precisely. And there is no pudding.



Even if most ignore my diatribes....one or two will probably have
an interest in my ideas....


Nope, I don't think anyone is still interested. You've repeated the same post about forty times now, and every time at great length with formatting that makes the post huge.




In fact, what my test SHOULD SOUND LIKE is that of an engineer
who simply re-recorded another voice over pre-recorded
background instrumentals. Many of you will say what's the
big deal - Any audio engineer can do that in a few hours?


I would be immensely impressed if it sounds like this but...


IF



If if if if if. You speculate, again.



posted on Aug, 3 2010 @ 04:24 PM
This is all a moot point. By now I could have licensed the masters, taken the vocals out, rerecorded the vocals, remixed everything, printed it and started selling it.

I don't know about most engineers, but with one recording room (big enough for drums) and a control room, I can turn out 13 full band tracks in seven to ten days. That includes basic tracking, overdubs, vocals, mixing, touchups, and mastering. In other words I could completely recreate Britney's album in the time it has taken this computer to almost finish one song.

If somebody has to pay an engineer and an IT tech to babysit the computer for years, and pay real engineers to keep product flowing, it is self-defeating. John Henry was replaced because the new technology was more efficient. The technology you put on display is less effective and requires a higher initial outlay.

How long would it take to pay for itself? Nobody knows how long it would take. Nobody even knows if it will work. Nobody knows when a "bad" rule might be written and the whole process destroyed.

Like I said earlier, it will take a decade for this stuff to even be competitive.



posted on Aug, 4 2010 @ 06:49 AM
What he said.


Let's come back and have this debate again in 10 years time. On Laser-Forums..



posted on Aug, 10 2010 @ 09:02 AM
Interesting report on work into new research as to how the brain works..

www.bbc.co.uk...



posted on Aug, 10 2010 @ 04:44 PM

Originally posted by MikeNice81
This is all a moot point. By Now I could have licensed the masters, taken the vocals out, rerecorded the vocals, remixed everything, printed it and started selling it.

[...]

Like I said earlier, it will take a decade for this stuff to even be competitive.


OK I'm back from doing some "Real Work" up in Northern BC.
I thought my thread would have been moved already but since
it isn't, I'll continue on...we are at 307 hours total aggregate time
for converting a Britney Spears Song that has been filtered
autonomously by software into SEPARATE BACKGROUND and
VOICE tracks using a rule-based Expert System/Neural Net
AND then transposing the voice track into a male voice
(Basso Profundo) using the same enunciations and timing
of the original and then recombining the background and
new voice track such that it APPEARS to sound realistic
and non-artificial.

---

While I must obviously concede that, time-wise, many will
think this test can't cut the mustard in terms of autonomous
digital voice production, I still think PARTS of this test will
have a useful demonstration component in that, at the very
least, it will show that it COULD ACTUALLY be done!

This means I have to EAT CROW in terms of my original time
estimate...but...I'll let the original test continue to whatever end,
but I've already made tweaks to the database rules so that specific
processing is completed only WHEN NECESSARY (which is more
complex and time-consuming to program but is DEFINITELY
FASTER!) rather than do a brute force attack of DSP filtering
of what is voice and what is background.
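The "only WHEN NECESSARY" tweak amounts to gating an expensive DSP pass behind a cheap test, so near-silent frames skip the costly work entirely. A hedged sketch of that pattern (the threshold value, frame layout, and function names are all illustrative, not the poster's actual code):

```python
def rms(frame):
    """Root-mean-square level of one audio frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def process_frames(frames, expensive_filter, silence_threshold=0.01):
    """Run the costly filter only on frames that are loud enough.

    Frames below the threshold are passed through untouched, which
    is where the speed-up over a brute-force pass comes from.
    """
    out = []
    for frame in frames:
        if rms(frame) >= silence_threshold:
            out.append(expensive_filter(frame))  # worth the CPU time
        else:
            out.append(frame)                    # near-silence: skip
    return out

# Toy usage: one silent frame (skipped) and one loud frame (filtered).
frames = [[0.0, 0.0], [0.5, -0.5]]
halve = lambda frame: [x * 0.5 for x in frame]
filtered = process_frames(frames, halve)
```

The brute-force version would be `[halve(f) for f in frames]`; the gated version trades a cheap RMS check per frame for skipping the expensive call wherever possible.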

All I can do is let the current test run its course until
it's done, see what the final result is, and then run the
NEW CODE to compare the two results. NOW THAT WOULD BE A
TRULY MEANINGFUL RESEARCH PROJECT!

Again, for the moderators: I say we SHOULD MOVE this
discussion to a separate thread called:

Autonomous Multimedia Content Creation: Can It Be Done?

so we don't muck up this filesharing thread
with my rather long posts.

P.S. Just so people know --- In My Book, the Scientific
Method REQUIRES that ALL results be completed and disclosed
so that FUTURE tests can be compared against the original
one.

This means that EVEN IF I HAVE TO EAT SOME CROW, I STILL
have an obligation to COMPLETE and DISCLOSE my results,
however good or bad they may appear to be, so that OTHERS
may come to their OWN conclusions as to what are GOOD or
BAD quality results. Just because it takes some extra time
to finish doesn't mean my test is INVALID; it just means
it's taking a lot longer than I thought it would... which
in itself can be considered a result! i.e. an evaluation
of my Time-To-Process estimate!



posted on Aug, 11 2010 @ 02:20 AM

Originally posted by StargateSG7
NOW THAT WOULD
BE A TRULY MEANINGFUL RESEARCH PROJECT!


Yes, and research projects are the only place this technology is at right now. Definitely, and perhaps definitively, not commercial.



Autonomous Multimedia Content Creation: Can It Be Done?


I wonder, though. I don't think this is a test of media creation; this is alteration. You'd have to devise another test for creation, I think, like: "Create a pop song from scratch". Let's see a neural net that can do that...



posted on Aug, 21 2010 @ 11:09 AM
Hmm, 9 days. I guess we will have to discuss this further on those laser-forums of the future....

On a side note, I just finished a new album collaboration in 9 days. It's not up to Britney supergloss production standards, but it sounds pretty good. Composed, recorded, mixed and produced (record sleeves, posters and golf tees with each release - long story) in 9 days.



posted on Aug, 23 2010 @ 12:23 AM
It has been a few weeks, and I must apologize for not
getting back to this forum a bit earlier due to some
"Work Related" issues. But, as discussed, I can NOW detail
the COMPLETION of the actual AUTONOMOUS processing of a
Britney Spears song using a rule-based expert system
(i.e. an application-specific Neural Net) that converts a
female voice into a male voice (i.e. Basso Profundo). It
uses a brute-force DSP (Digital Signal Processing)
methodology in which hundreds of thousands of rules-of-thumb
create and apply many notch filters (for you audio buffs!)
to separate out specific frequencies into bands, allowing
instruments and voice material to be split into distinct
audio tracks which are later replaced and remixed into a
new audio master file.
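For readers curious what a notch filter actually does: it is a narrow band-reject filter that removes one frequency while passing everything else. Below is a standard biquad notch using the well-known RBJ Audio EQ Cookbook coefficient formulas, in plain Python. The rule base described in the post would be choosing the center frequencies and Q values automatically; this sketch makes no attempt at that.

```python
import math

def notch_coeffs(center_hz, sample_rate, q=30.0):
    """Biquad notch filter coefficients (RBJ Audio EQ Cookbook)."""
    w0 = 2.0 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    # Normalize so that a[0] == 1.
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(samples, b, a):
    """Direct Form I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Demo: a 440 Hz sine at an 8 kHz sample rate, notched at 440 Hz.
sr = 8000
b, a = notch_coeffs(440.0, sr)
tone = [math.sin(2 * math.pi * 440.0 * n / sr) for n in range(8000)]
out = biquad(tone, b, a)
```

After the filter's transient dies out, the 440 Hz tone is almost completely removed, while content away from the notch (e.g. DC) passes through at unity gain; separating a mix into bands means running banks of filters like this in parallel.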

A series of secondary rules-of-thumb allow room-acoustics
and sizing to be determined and estimated so that reverb
and/or echo can be removed (or added!) to allow tertiary
rules-of-thumb to TIME and estimate specific inflections,
intonations and other technicalities of the voice/singing
parts of a song.
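Estimating room acoustics from a finished recording is a hard research problem, but the textbook starting point, once a room impulse response is in hand, is Schroeder backward integration: compute the energy remaining after each instant, plot it in dB, and read off how long the curve takes to drop 60 dB (the RT60 reverberation time). A hedged sketch under that assumption (function names are mine, and obtaining the impulse response in the first place is the hard part this skips):

```python
import math

def energy_decay_curve_db(impulse_response):
    """Schroeder backward integration: remaining energy at each
    sample, in dB relative to the total energy of the response."""
    energies = [x * x for x in impulse_response]
    total = sum(energies)
    edc = []
    remaining = total
    for e in energies:
        # Floor guards against log10(0) at the very end.
        edc.append(10.0 * math.log10(max(remaining, 1e-300) / total))
        remaining -= e
    return edc

def decay_time(edc_db, sample_rate, drop_db=60.0):
    """First time (seconds) at which the decay curve falls drop_db
    below its starting level - a crude RT60 estimate."""
    for n, level in enumerate(edc_db):
        if level <= -drop_db:
            return n / sample_rate
    return None  # never decayed that far in this recording

# Demo: a synthetic exponentially decaying "impulse response".
sr = 8000
tau = 1000.0  # amplitude time constant, in samples
h = [math.exp(-n / tau) for n in range(20000)]
t60 = decay_time(energy_decay_curve_db(h), sr)
```

For an exponential decay `exp(-n/tau)` the amplitude falls 60 dB after `3*ln(10)*tau` samples (about 6908 here, i.e. roughly 0.86 s), and the estimate above lands on that value; a de-reverb rule base would use such an estimate to decide how much tail to strip.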

A fourth-dimensional array of sub-rules then allows an
incoming/input "voice print" to be taken apart and re-timed
& resampled to FIT and match the specific timings of the
original voice track. Other sub-rules then use data from a
3-dimensional physics-based acoustic model of a male human
vocal tract to "Pitch-Shift" the replacement voice track in
a NATURALISTIC manner. That track contains spoken-word
lyrics (i.e. a sampled male voice) which are "stretched"
and "re-pitched" into a singing-like vocalization that
tries to match the original timings and inflections of the
source voice track.
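The simplest possible "pitch shift" is plain resampling: reading the samples faster raises the pitch but also shortens the clip, which is precisely why real voice transposition needs the separate time-stretching step described above to restore the original timing. A minimal linear-interpolation resampler (illustrative only; it preserves neither duration nor vocal-tract formants, unlike the physics-based model the post describes):

```python
def resample(samples, ratio):
    """Resample by `ratio`: ratio > 1 reads faster (higher pitch,
    shorter output when played at the original rate); ratio < 1
    reads slower (lower pitch, longer output). Values between
    sample positions are linearly interpolated."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

To drop a vocal an octave toward Basso Profundo you would resample with `ratio = 0.5`, which doubles the clip length, and then time-compress the result back to the original duration without re-raising the pitch; that second step is where all the hard machinery lives.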

There are numerous variables that go into such audio
processing, which requires techniques such as "Audio
Raytracing", 3D quadtree sub-sampling of 3D acoustic
environments, advanced pitch-based and timing-based notch
filtering, parametric frequency separation, DSP-based audio
waveform analysis, and neural-net rules-of-thumb database
management and the application of those rules to specific
audio samples and groups of samples.

The total amount of data samples was on the order of
TWO PETABYTES.....Think about that number!!!!

That's over 2000 TERABYTES of data going over a
limited-bandwidth grid-based network running on 12 nodes of
2003-era Windows 2000/Windows XP machines with AMD Athlon
single-core and a few dual-core AMD processors. While this
network is not part of our advanced latest & greatest
hexacore computer network system, it is definitely powerful
enough for a test of this type.

At 502 HOURS of total aggregate CPU processing time
(i.e. NOT LINEAR TIME), this was an ENORMOUS processing job
that gave credence to the validity and ABILITY of grid-based
networking to accomplish tasks of almost any kind!
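"Aggregate" CPU time is summed across all nodes, not wall-clock time: 502 hours over 12 nodes is roughly 42 hours per node if the load balances perfectly. The grid pattern itself, for work that splits cleanly, is just "chunk the data, farm out the chunks, stitch the results back in order." A toy standard-library sketch of that shape (the doubling "work" function, node count, and chunking scheme are all illustrative, and real grids run across machines rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for one node's share of the DSP work."""
    return [x * 2.0 for x in chunk]

def grid_process(samples, nodes=4):
    """Split the sample list into roughly `nodes` contiguous chunks,
    farm them out to worker threads, and stitch the results back
    together in their original order."""
    size = (len(samples) + nodes - 1) // nodes
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        results = pool.map(process_chunk, chunks)  # preserves chunk order
    return [y for chunk in results for y in chunk]
```

Because `map` yields results in submission order, the stitched output matches what a single-node pass would have produced, which is the property that lets aggregate hours substitute for wall-clock hours.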

This test was an application of a specific series of rules
designed for audio processing. This does NOT preclude
the neural net from being used in VIDEO processing,
math & engineering tasks, vision recognition, optical
character recognition, context-based text
search & categorization and any number of other
tasks usually relegated to humans.

The limitations of ANY neural net are the initial BREADTH
and accuracy of the rules-of-thumb used to make the many
micro and macro decisions which allow computers to match
and/or exceed the performance of humans in critical
decision-making tasks which can include the autonomous
creation of high quality multimedia content.
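At its core, a rule-based expert system is an ordered collection of (condition, conclusion) pairs applied repeatedly to a set of known facts until nothing new can be derived; as the post says, the breadth and accuracy of those rules is the whole game. A toy forward-chaining sketch (the audio-classification rules here are invented purely for illustration):

```python
def run_rules(facts, rules):
    """Forward-chaining loop: keep firing any rule whose condition
    set is satisfied by the current facts, adding its conclusion,
    until a full pass derives nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical audio rules-of-thumb: the second rule can only fire
# after the first one has derived "likely_voice".
rules = [
    ({"energy_above_2khz", "harmonic"}, "likely_voice"),
    ({"likely_voice", "vibrato"}, "likely_singing"),
]
derived = run_rules({"energy_above_2khz", "harmonic", "vibrato"}, rules)
```

A bad rule here simply pollutes the derived fact set, which is a small-scale picture of how "Bad Initial Input" can poison every downstream decision the system makes.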

So in the next post I outline my results:

Please read below......


