PICList Thread
'Sound source angle detection'
1999\09\16@042135 by Michael Rigby-Jones

> We're working on a project that uses two sound sensors to detect the
> angle to the sound source, like a loud clap. The idea is to calculate
> (PIC16F84?) this angle based on the time delay between the two
> sensors (placed about 10cm apart).
>
> I'm thankful for any ideas, but I'm especially interested in how to
> make the sound sensors.
>
> [nOs]
>
You don't have to make them...use microphones :o)  Seriously, it
depends on what type of sounds you want to detect.  Will it be any
loud sound?  If that's the case you will need the microphone, an
active rectifier to amplify and rectify the AC signal from the
microphone, and a comparator to give a clean logic signal to the PIC.

One possible problem to watch out for is reflections upsetting the
calculations.

Regards

Mike Rigby-Jones

1999\09\16@052107 by Dag Bakken

MRJ> One possible problem to watch out for is reflections upsetting
MRJ> the calculations.

That shouldn't be a problem, since a reflection would typically arrive
later than the time the sound takes to travel from one mic to the
other.  The hazard, of course, is when the reflection comes from
something closer to one mic than the distance between them (the mics).

-DS

1999\09\16@052931 by Lynx {Glenn Jones}

I have a question about this whole concept. Are you detecting sounds
which are a fixed distance away? In that case I have no problem.
However, how will you be able to detect the angle of the source when
you have no distance information?

------------------------------------------------------------------------------
A member of the PI-100 Club:
3.1415926535897932384626433832795028841971693993751
058209749445923078164062862089986280348253421170679

On Thu, 16 Sep 1999, Dag Bakken wrote:


1999\09\16@060119 by Dag Bakken

LGJ> I have a question about this whole concept. Are you detecting
LGJ> sounds which are a fixed distance away? In that case I have no
LGJ> problem. However, how will you be able to detect the angle of the
LGJ> source when you have no distance information?

Well...  An angle/direction doesn't need distance information.
The reading will be dead centre when there is no time difference
between the left and right mic.  This is of course true regardless of
distance.  Far left is when the left mic picks up the sound first and
the right mic picks up the same sound after the full travel time
between the mics.  If the source is somewhere in the front-left area,
the sound will reach the right mic slightly earlier than when the
source is at the far left.
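
In code, that left/centre/right rule is a single line of trigonometry.
A minimal sketch in C (the 340 m/s speed of sound and the 10cm spacing
are taken from the original post; the far-field approximation discussed
later in the thread is assumed, and the clamp guards against noisy
delays slightly exceeding the geometric maximum):

#include <math.h>

/* Far-field bearing from the time difference of arrival (TDOA).
 * dt_s: arrival time at the right mic minus arrival time at the left
 * mic, in seconds.  Positive dt_s means the left mic heard the sound
 * first, so positive angles point left: 0 = centre, +/-90 = far side. */
double bearing_deg(double dt_s)
{
    const double c = 340.0;     /* speed of sound, m/s (assumed)  */
    const double d = 0.10;      /* mic spacing, m (from the post) */
    double s = c * dt_s / d;    /* sine of the bearing angle      */
    if (s >  1.0) s =  1.0;     /* clamp measurement noise        */
    if (s < -1.0) s = -1.0;
    return asin(s) * 180.0 / M_PI;
}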

-DS

1999\09\16@063923 by John Hallam

On Thu, 16 Sep 1999, Dag Bakken wrote:

> LGJ> I have a question about this whole concept. Are you detecting
> LGJ> sounds which are a fixed distance away? In that case I have no
> LGJ> problem. However, how will you be able to detect the angle of the
> LGJ> source when you have no distance information?
>
> Well...  An angle/direction doesn't need distance information.
> [ ... snip ... ]

       This is almost right.  The time difference measured at the
microphones is directly related to the difference in path length between
source -> left microphone and source -> right microphone.  In 2
dimensions, the set of points whose distances from two fixed points (foci)
differ by a constant is a hyperbola;  in 3 dimensions, the surface you
need results from spinning the hyperbola around the axis of the sensor
system.

       What people normally do is (a) ignore elevation and assume the
sound source is in the azimuthal plane and (b) assume the source is far
enough away that the hyperbola looks like a straight line.  For many
applications these two approximations, while strictly false, are perfectly
acceptable.
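
Approximation (b) is easy to sanity-check numerically.  A sketch in C
(the 10cm baseline and the 30-degree source bearing are illustrative
assumptions): it computes the exact path difference from geometry, then
recovers the bearing with the straight-line (arcsin) formula.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double half  = 0.05;                  /* half the mic spacing, m */
    const double theta = 30.0 * M_PI / 180.0;   /* true bearing, rad       */
    double ranges[] = { 0.2, 0.5, 1.0, 5.0 };   /* source distances, m     */

    for (int i = 0; i < 4; i++) {
        double sx = ranges[i] * sin(theta);     /* source position */
        double sy = ranges[i] * cos(theta);
        double dl = hypot(sx + half, sy);       /* path to left mic  (-half,0) */
        double dr = hypot(sx - half, sy);       /* path to right mic (+half,0) */
        double est = asin((dl - dr) / (2.0 * half));
        printf("r = %3.1f m: estimated %.2f deg (true 30.00)\n",
               ranges[i], est * 180.0 / M_PI);
    }
    return 0;
}

With the 10cm baseline the straight-line estimate is already within
about a degree at 20cm range, which is why (b) is usually harmless.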

John Hallam.
Senior Lecturer, School of Artificial Intelligence,
Division of Informatics, University of Edinburgh,
Scotland.

1999\09\16@064341 by Lynx {Glenn Jones}

Thank you for the clarification. It was the hyperbolic nature that was
getting me, but you're right that hyperbolas look pretty straight for
this application.

------------------------------------------------------------------------------
A member of the PI-100 Club:
3.1415926535897932384626433832795028841971693993751
058209749445923078164062862089986280348253421170679

On Thu, 16 Sep 1999, John Hallam wrote:


1999\09\16@072401 by Dag Bakken

>> LGJ> I have a question about this whole concept. Are you detecting
>> LGJ> sounds which are a fixed distance away? In that case I have no
>> LGJ> problem. However, how will you be able to detect the angle of the
>> LGJ> source when you have no distance information?
>>
>> Well...  An angle/direction doesn't need distance information.
>> [ ... snip ... ]

JH>         This is almost right.  The time difference measured at the
JH> microphones is directly related to the difference in path length
JH> between source -> left microphone and source -> right microphone.
JH>  In 2 dimensions, the set of points whose distances from two fixed
JH> points (foci) differ by a constant is a hyperbola;  in 3
JH> dimensions, the surface you need results from spinning the
JH> hyperbola around the axis of the sensor system.

Yes.  So what you should do is measure elevation, right?  What I
explained in a (rather long) mail earlier today was how the human
brain/ears deal with front/back information.  The same principle is
used for elevation.  I know that approach is kind of far-fetched for a
controller, but it works.  So... if that's implemented, you would
effectively be measuring in 3D - even though I think nothing less than
AI should try it with that method.

-DS

1999\09\16@121917 by Wagner Lipnharski

Suppose you can use two or more different tone frequencies at the
emitter.

In a direct sound wave, the air mass can change the intensity level of
some frequencies, depending on how well the air transports sound.

Your receiver can identify these differences in the direct wave and
store them in non-volatile memory.

Different objects and surfaces absorb and reflect different
percentages of the sound intensity. Only air can be compared to air; I
believe no other material can be confused with air's sound transport
characteristics.  Even other gases have different transport curves.

If your receiver can compare the received levels of those frequencies
with the numbers stored from the direct wave, it will not only be easy
to identify a reflected wave, but also to identify the reflector material.

Suppose you install an obstacle (a pole) exactly in the middle of the
direct path between the transmitter and receiver.  The sound wavefront
is concentric, and the gap behind the pole fills in again as the wave
diffracts and rebuilds past it, of course with much less intensity
than the direct wave.  If there is a reflector directing waves toward
the receiver, those waves will reach the receiver stronger than the
rebuilt direct wave.  Your receiver could discriminate between the
two: the rebuilt wave arrives sooner than the reflected one, but with
much less intensity.  In this way it is possible to measure both the
shortest *air distance* between origin and destination and the path
via reflected waves.

Remember that air is still the transport medium for sound, and a
non-line-of-sight path will always produce reflected or rebuilt waves,
which need not follow a straight line between origin and destination.

A rotating stereo head can also identify the reflection angle, by
measuring how the relative delays change with the head's angle.
Comparing the reflected and rebuilt waves can also help identify not
only the reflector's angle, but its position in space.

In reality we do this all the time, and it is widely exploited in
surround systems to create the illusion of three-dimensional depth and
distance.

You can try this experiment: sit in a rotating chair, close your eyes
and rotate the chair slowly. Keep a radio on at the other side of the
room and ask somebody to walk between you and the radio. Even with the
chair rotating, you will be 100% able not only to identify when the
person is obstructing the direct wave, but also to guess the person's
distance from you, all based on reflected and rebuilt waves, echoes,
and different intensity levels at different frequencies.

Wagner.

1999\09\16@122357 by Harold M Hallikainen

On Thu, 16 Sep 1999 11:38:33 +0100 John Hallam <john@DAI.ED.AC.UK>
writes:

>
>        This is almost right.  The time difference measured at the
>microphones is directly related to the difference in path length
>between
>source -> left microphone and source -> right microphone.  In 2
>dimensions, the set of points whose distances from two fixed points
>(foci)
>differ by a constant is a hyperbola;  in 3 dimensions, the surface you
>need results from spinning the hyperbola around the axis of the sensor
>system.
>
>        What people normally do is (a) ignore elevation and assume the
>sound source is in the azimuthal plane and (b) assume the source is
>far
>enough away that the hyperbola looks like a straight line.  For many
>applications these two approximations, while strictly false, are
>perfectly
>acceptable.
>

       Of course this hyperbola (just like the ones drawn on marine
charts for LORAN navigation) extends both in front of and behind the
person doing the listening.  I've always wondered how we can tell if a
sound is in front of us or behind us.  I have two theories.  1.  Due to
the somewhat directional nature of our ears (they "point" forward),
sounds behind us will have more reverberation than sounds in front of
us, since there is more "gain" towards any reflector in front of us
than towards the source behind us.  2.  On hearing a sound, we turn our
head slightly.  If we turn our head to the left and the sound gets
closer to the right ear (as determined by the change in time delay),
the sound is in front of us.  If it gets farther from our right ear,
the sound is behind us.
       I suspect that method 2 is the more probable.  This would,
however, make binaural headphones sound unnatural, since the "image"
would move when we turned our head.  If binaural headphones had a
direction sensor feeding back to a DSP of some sort that inserted the
appropriate delays as we turned our head, we could perhaps get close
to realistic directional effects in sound.
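
Method 2 is also trivial to mechanise for a microphone pair.  A sketch
in C, assuming the array is rotated a few degrees to the LEFT between
two measurements, and taking the inter-mic time difference (ITD) as
positive when the sound is nearer the right mic:

/* ITD is proportional to sin(azimuth), so a front source and its
 * back-mirror image give the same ITD -- but turning the array left
 * shifts every apparent azimuth to the right: a front source's ITD
 * grows (the sound moves nearer the right ear) while a back source's
 * ITD shrinks.  Valid for sources away from due left/right.
 * Returns 1 for "in front", 0 for "behind". */
int source_is_in_front(double itd_before_s, double itd_after_s)
{
    return itd_after_s > itd_before_s;
}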

Harold



Harold Hallikainen
harold@hallikainen.com
Hallikainen & Friends, Inc.
See the FCC Rules at http://hallikainen.com/FccRules and comments filed
in LPFM proceeding at http://hallikainen.com/lpfm


1999\09\16@134056 by Robert A. LaBudde

At 12:20 PM 9/16/99 -0400, Harold wrote:
>On Thu, 16 Sep 1999 11:38:33 +0100 John Hallam <john@DAI.ED.AC.UK>
>>        This is almost right.  The time difference measured at the
>>microphones is directly related to the difference in path length
>>between
>>source -> left microphone and source -> right microphone.  In 2
>>dimensions, the set of points whose distances from two fixed points
>>(foci)
>>differ by a constant is a hyperbola;  in 3 dimensions, the surface you
>        Of course this hyperbola (just like the ones drawn on marine
>charts for LORAN navigation) go both in front and behind the person doing
>the listening.  I've always wondered how we can tell if a sound is in

It is my understanding that a sound generated from a single point would
propagate by spherical (not hyperbolic) waves, which can be approximated
by plane waves at long distances from the source.

Since the speed of sound is nominally 340 m/s (1115 ft/s), you will have
to time the differential pulse detection quite accurately to get
reasonable accuracy from microphone arrays that are not spaced very far
apart.
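
Some rough numbers for the 10cm pair discussed earlier (a sketch;
340 m/s assumed).  Since dt = (d/c)*sin(theta), the timing resolution
needed per degree is (d/c)*cos(theta)*(pi/180), finest at broadside:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c = 340.0, d = 0.10;   /* speed of sound m/s, spacing m */
    printf("max delay (endfire)   : %.0f us\n", d / c * 1e6);
    printf("1 degree at broadside : %.1f us\n",
           (d / c) * (M_PI / 180.0) * 1e6);
    return 0;
}

That works out to about 294us end to end and roughly 5us per degree
near the centre -- within reach of a microcontroller timer, but
increasingly demanding as the mics move closer together.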

================================================================
Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: ral@lcfltd.com
Least Cost Formulations, Ltd.                   URL: http://lcfltd.com/
824 Timberlake Drive                            Tel: 757-467-0954
Virginia Beach, VA 23464-3239                   Fax: 757-467-2947

"Vere scire est per causae scire"
================================================================

1999\09\16@214012 by W. Sierke

From: Harold M Hallikainen <haroldhallikainen@JUNO.COM>


>         Of course this hyperbola (just like the ones drawn on marine
> charts for LORAN navigation) go both in front and behind the person doing
> the listening.  I've always wondered how we can tell if a sound is in
> front of us or behind us.  I have two theories.  1.  Due to the somewhat
> directional nature of our ears (they "point" forward), sounds behind us
> will have more reverberation than sounds in front of us, since there is
> more "gain" towards any reflector in front of us than towards the souce
> behind us.  2.  On hearing a sound, we turn our head slightly.  If we

I seem to recall being moderately impressed with an Aureal A3D-based
sound card. One of their 3-D demos had a helicopter flying around you in
a circular path. From recollection, as well as a pretty good front/rear
impression, there was some degree of altitude discernible too, although
it may have been most noticeable while the altitude was changing. (This
was using stereo headphones; the card also supported front/rear
speakers, but I don't have good enough speakers to try that out.)


Wayne

1999\09\17@023903 by Dag Bakken

All directional hearing is mainly a product of two parts.  The most
important one is learning.  Tests have shown that babies do not have
the same stereo perception as those a bit older.  What they hear is
more like two (similar) sounds.  But they learn otherwise.  It's the
same thing with direction, even though the mechanism is different.
The brain learns how your ear picks up sound from different angles by
learning how a sound should really sound, then how it actually sounds
when placed above or behind or wherever.  When the hearing mechanism
has learned how this works, it can make assumptions when it hears a
sound it hasn't heard before.  That assumption will (most likely) not
be as accurate, but close.
Your ear actually changes the frequency spectrum of a sound depending
on direction.  This is why an ear is shaped in the totally asymmetric
way it is.  But since no two ears are the same, it poses quite a
challenge for those recording Q-sound.  No two people will hear the
directions in a Q-sound recording the same way, since their ears do
not function exactly the same way and may have learned slightly
differently from what the Q-sound system is trying to reproduce.

-DS

HMH> the listening.  I've always wondered how we can tell if a sound is in
HMH> front of us or behind us.  I have two theories.  1.  Due to the somewhat
HMH> directional nature of our ears (they "point" forward), sounds behind us
HMH> will have more reverberation than sounds in front of us, since there is
HMH> more "gain" towards any reflector in front of us than towards the source
HMH> behind us.  2.  On hearing a sound, we turn our head slightly.  If we
HMH> turn our head to the left and the sound got closer to the right ear (as
HMH> determined by variation in time delay), the sound is in front of us.  If
HMH> it got farther from our right ear, the sound is behind us.
HMH>         I suspect that method 2 is the most probable.  This would,
HMH> however, make binaural headphones not sound natural, since the "image"
HMH> would move when we turned our head.  If binaural headphones had a
HMH> direction sensor that fed back to a DSP of some sort that would insert
HMH> the appropriate delays as we turned our head, we could perhaps get close
HMH> to realistic directional effects in sound.

1999\09\17@074248 by Nils Olav Selåsdal


>We're working on a project that uses two sound sensors to detect the angle
>to the sound source, like a loud clap. The idea is to calculate (PIC16F84?)
>this angle based on the time delay between the two sensors (placed about
>10cm apart).

>I'm thankful for any ideas, but I'm especially interested in how to make
>the sound sensors.

We've gotten a bit further now; it wasn't all that bad.
It still can't determine whether the sound is coming from the back or
the front.

The sensors are placed 10cm apart. A resolution of 9 degrees gives us
about 70 instructions in which to detect the delay on a 10MHz 16F84 (a
little optimizing will hopefully give us a resolution of 4.5 degrees).
Our method requires very little calculation: we just count each whole
delay of 0.1/(340*10) seconds (about 29us), and one delay step = 9
degrees!  And as we use a resolution of 9 degrees, the error can be
large at great distances.
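
The counting loop might look something like this in C (only a sketch:
PORTB and the pin masks are assumptions standing in for the two
comparator outputs, STEP_LOOPS must be calibrated to the real loop
time, and the actual 16F84 version would be hand-timed assembly):

#include <stdint.h>

extern volatile uint8_t PORTB;     /* device register, assumed provided */
#define MIC_L (PORTB & 0x01)       /* left comparator output (assumed)  */
#define MIC_R (PORTB & 0x02)       /* right comparator output (assumed) */
#define STEP_LOOPS 10              /* loop iterations per ~29us step -- */
                                   /* calibrate on the real hardware    */

/* Count polling loops between the two comparator edges.  Each whole
 * ~29us step is ~9 degrees; the sign says which side (positive = the
 * left mic fired first, i.e. the source is to the left). */
int8_t angle_steps(void)
{
    uint16_t n = 0;

    while (!MIC_L && !MIC_R)       /* wait for the first edge */
        ;
    if (MIC_L) {
        while (!MIC_R && n < 10u * STEP_LOOPS) n++;
        return  (int8_t)(n / STEP_LOOPS);
    } else {
        while (!MIC_L && n < 10u * STEP_LOOPS) n++;
        return -(int8_t)(n / STEP_LOOPS);
    }
}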

[nOs]

1999\09\17@132326 by John Hallam

A further thought on the sound direction discussion:  why use only two
microphones?  If you want to discriminate azimuth and elevation a 3
microphone setup is simple and sufficient.  You can also tell front/back
with it by tilting the sensor plane, I think.  (Or you can rely on
the directional response of the microphones to reject sound from
behind the sensor.) Biology is not always best...
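
For what it's worth, the two-TDOA solve for such a setup is short.  A
sketch in C, assuming an L-shaped far-field array: mic0 at the origin,
mic1 on the +x axis and mic2 on the +y axis, both at spacing d.  Each
delay yields one direction cosine, so azimuth is unambiguous and
elevation falls out of the unit-vector constraint:

#include <math.h>

/* dt1_s: arrival at mic0 minus arrival at mic1 (positive if mic1
 * hears the sound first); dt2_s likewise for mic2.  Returns -1 if
 * the delays are geometrically inconsistent (noise). */
int direction_from_tdoas(double dt1_s, double dt2_s, double d,
                         double *azim_deg, double *elev_deg)
{
    const double c = 340.0;          /* speed of sound, assumed */
    double ux = c * dt1_s / d;       /* direction cosine along x */
    double uy = c * dt2_s / d;       /* direction cosine along y */
    double h  = ux * ux + uy * uy;
    if (h > 1.0) return -1;
    *azim_deg = atan2(uy, ux) * 180.0 / M_PI;  /* full 360, no mirror */
    *elev_deg = acos(sqrt(h)) * 180.0 / M_PI;  /* 0 = in plane; the   */
                                               /* up/down sign stays  */
                                               /* ambiguous           */
    return 0;
}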

John Hallam,
School of Artificial Intelligence
University of Edinburgh

1999\09\17@132338 by Harold M Hallikainen

On Thu, 16 Sep 1999 13:39:35 -0400 "Robert A. LaBudde" <ral@LCFLTD.COM>
writes:

>
>It is my understanding that a sound generated from a single point
>would
>propagate by spherical (not hyperbolic) waves, which can be
>approximated by
>plane waves at long distances from the source.
>

       True, I'd expect the sound to travel out in a sphere.  However,
the curve you get where the DIFFERENCE in distance to two points is a
constant is a hyperbola.  Let's see if I can remember it...  I think it's

       1 = x^2/a^2 - y^2/b^2

       You get the various conic sections by messing with this.  The minus
makes it a hyperbola.  Change it to a plus and you get an ellipse, where a
and b are half the lengths of the axes.  Make them equal and they are the
radius of a circle.
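
Tying that back to the microphone problem (a sketch; foci at the two
mics, 10cm apart as in the original post): the measured delay fixes the
conic directly, since the path difference c*dt is the "2a" of the
hyperbola and the focal distance is half the mic spacing.

#include <math.h>

/* Hyperbola parameters from a measured delay, mics at (-f,0), (+f,0). */
int hyperbola_params(double dt_s, double *a, double *b)
{
    const double c = 340.0;           /* speed of sound, assumed   */
    const double f = 0.05;            /* half the 10cm mic spacing */
    *a = fabs(c * dt_s) / 2.0;        /* half the path difference  */
    if (*a > f) return -1;            /* impossible delay (noise)  */
    *b = sqrt(f * f - (*a) * (*a));
    return 0;
}

Its asymptote leaves the mics at asin(a/f) off broadside, which is
exactly the far-field bearing estimate discussed earlier in the thread.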

Harold


Harold Hallikainen
harold@hallikainen.com
Hallikainen & Friends, Inc.
See the FCC Rules at http://hallikainen.com/FccRules and comments filed
in LPFM proceeding at http://hallikainen.com/lpfm



1999\09\17@183719 by Erik Reikes

At 12:53 PM 9/17/99 -0400, you wrote:

Seems to me that given a reasonably sensitive receive circuit, all you
would have to do is have a directional detector and slew it around on
some kind of gimbal....

-Erik Reikes



1999\09\18@122447 by Robert A. LaBudde

At 03:38 PM 9/17/99 -0700, Erik Reikes wrote:
>>True, I'd expect the sound to travel out in a sphere.  However,
>>the curve you get where the DIFFERENCE in distance to two points is a
>>constant is a hyperbola.  Let's see if I can remember it...  I think it's
>>
>>        1 = x^2/a^2 - y^2/b^2
>>
>>        You get various conic sections by messing with this.  The minus
>>makes it a hyperbola.  Change it to a plus and you get an ellipse where a
>>and b are half the length of the axis.  Make them equal and they are the
>>radius of the circle.
>>
>
>Seems to me that givena reasonably sensitive receive circuit all you would
>have to do would be to have a directional detector, and slew it around on
>some kind of gimbal....

For some reason, I didn't get Harold's response to my post.

The hyperbola is apparently the locus in the plane of all aliased points
that would have given the same time delay at the dipole of microphone
receivers.

Obviously two mikes will not provide an accurate position in the plane,
or even an angle. You need three mikes to do this. (Three mikes give two
independent timing intervals, which yield a single-point locus.) With
two intervals, the ability to time both accurately (they may be nearly
simultaneous) becomes a performance issue.

================================================================
Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: ral@lcfltd.com
Least Cost Formulations, Ltd.                   URL: http://lcfltd.com/
824 Timberlake Drive                            Tel: 757-467-0954
Virginia Beach, VA 23464-3239                   Fax: 757-467-2947

"Vere scire est per causae scire"
================================================================

1999\09\18@131141 by Sean H. Breheny

It seems to me that this situation is really analogous to light
interference (or, perhaps closer in this case, arrays of RF receiving
antennas). You can divide the situation into two regimes: the near
field and the far field.

In the near field, what you are saying is correct: the difference in
path length is strongly linked to BOTH distance AND angle, so you
cannot separate the two and extract angle information with only two
receivers (see note below).

However, in the far field, the dependence on angle remains while the
dependence on distance becomes much weaker. If you keep the distance
between the two mikes less than one wavelength, there will be a
one-to-one correspondence between physical angle and received phase
difference, almost independent of changes in distance.

If you are looking to measure sounds in the 1kHz range, this seems
practical to me. However, it WOULD be difficult to do for, say, 20kHz,
because your mike separation would have to be only about 1cm, so the
width of the mike itself would play a role in the phasing.

It seems to me that someone else essentially said the same thing by
noting that the hyperbola becomes flatter as you go farther out. Why
was the idea dropped?

NOTE: It seems to me that you might be able to play a trick and make
this work in the near field, too. If you could determine the distance
from the source to the mikes, you could extract the angle even in the
near field. Since the sound level drops off with distance by a known
relationship, and since you know the DIFFERENCE in distance (from the
difference in received phase), you should be able to solve for the
actual distance by looking at the difference in received signal
strength between the two mikes. Even this would break down, though, if
there were objects between source and mikes, OR if you brought the
source so close that you were now in ITS near field, so that you no
longer have the 1/r^2 dependence of sound level.
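
That two-equation trick can be written down directly.  A sketch in C,
using pressure amplitude falling off as 1/r (intensity as 1/r^2) and
assuming a point source with unobstructed paths; amp_near and amp_far
are the measured amplitudes at the nearer and farther mic:

#include <math.h>

/* The amplitude ratio gives r_far/r_near; the TDOA gives
 * r_far - r_near.  Two equations, two unknowns.  Returns -1 when
 * the measurement is degenerate. */
int ranges_from_level_and_tdoa(double amp_near, double amp_far,
                               double dt_s, double *r_near, double *r_far)
{
    const double c  = 340.0;               /* speed of sound, assumed */
    double dd = c * fabs(dt_s);            /* path difference, m      */
    double k  = amp_near / amp_far;        /* = r_far / r_near (> 1)  */
    if (k <= 1.0 || dd <= 0.0) return -1;  /* no usable solution      */
    *r_near = dd / (k - 1.0);
    *r_far  = k * (*r_near);
    return 0;
}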

Sean

At 12:23 PM 9/18/99 -0400, you wrote:

| Sean Breheny
| Amateur Radio Callsign: KA3YXM
| Electrical Engineering Student
\--------------=----------------
Save lives, please look at http://www.all.org
Personal page: http://www.people.cornell.edu/pages/shb7
shb7@cornell.edu ICQ #: 3329174

1999\09\19@115058 by Wagner Lipnharski

> All directional hearing is mainly a product of two parts.  The most
> important one is learning.  Tests have shown that babies do not have
> the same stereo perception as those a bit older.  What they hear is
> more like two (similar) sounds.  But they learn otherwise.  It's the
> same thing with direction, even though the mechanism is different.
> The brain learns how your ear picks up sound from different angles by
> learning how a sound should really sound, then how it actually sounds
> when placed above or behind or wherever.  When the hearing mechanism
> has learned how this works, it can make assumptions when it hears a
> sound it hasn't heard before.  That assumption will (most likely) not
> be as accurate, but close.
> Your ear actually changes the frequency spectrum of a sound depending
> on direction.  This is why an ear is shaped in the totally asymmetric
> way it is.  But since no two ears are the same, it poses quite a
> challenge for those recording Q-sound.  No two people will hear the
> directions in a Q-sound recording the same way, since their ears do
> not function exactly the same way and may have learned slightly
> differently from what the Q-sound system is trying to reproduce.

Two points:

a) Learning is such a marvelous thing. Even blind babies, who cannot
see the visual origin of a sound and so have a much harder time
building an "audible spatial map", still manage it, and manage it very
well.

b) Our hearing system cannot be compared to two simple, stupid
microphones, since we use several frequency discriminators in each
ear.  Several auditory nerve sensors are located in series along the
spiral hearing organ.  As the sound travels into the system the high
frequencies die out first, so the sensors for them sit nearest the
entrance of the spiral.  In reality all the sensors are equal, and it
is a brain activity to "learn" to discriminate what the electric
signal from each sensor means (i.e. different frequencies).  In the
same way that a baby needs to learn how to control a mechanical
motion, it *probably* also needs to learn how to discriminate
different frequencies.  I wonder whether the time a baby needs to
learn to control its arm movements is lengthened by the fact that its
visual focusing system is also still immature.

If you want to *try* to duplicate the human hearing system, start by
installing at least 10 microphones 5mm apart in each "robot ear", with
frequency band filters, digitize those 20 analog signals, and feed a
powerful processor that will create a spectral-spatial map based on
phase, frequency and level.
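
A very scaled-down sketch of the per-band part of that idea, in C:
pull one frequency band out of each channel with a single-bin DFT and
compare phases (band frequency, sample rate and buffer length are all
illustrative assumptions).  The phase difference converts to a
per-band delay as dt = dphi / (2*pi*f):

#include <math.h>

/* Inter-channel phase difference in one band via a single-bin DFT. */
double band_phase_diff(const float *left, const float *right, int n,
                       double f_hz, double fs_hz)
{
    double w = 2.0 * M_PI * f_hz / fs_hz;   /* bin frequency, rad/sample */
    double cl = 0, sl = 0, cr = 0, sr = 0;
    for (int i = 0; i < n; i++) {
        cl += left[i]  * cos(w * i);  sl -= left[i]  * sin(w * i);
        cr += right[i] * cos(w * i);  sr -= right[i] * sin(w * i);
    }
    double dphi = atan2(sl, cl) - atan2(sr, cr);
    while (dphi >  M_PI) dphi -= 2.0 * M_PI;   /* wrap to (-pi, pi] */
    while (dphi < -M_PI) dphi += 2.0 * M_PI;
    return dphi;
}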

We also hear by bone conduction, which feeds the sensors directly...
Try this: close both ears with your fingers, close your eyes, and
slowly rotate your head in front of your noisy computer fan; you will
still be able to identify the origin of the sound without the usual
airborne transmission.  This eliminates some of the theories about
reflection and phase shift in the ear muscles and the external ear
canal.  If the low-frequency noise generated by your finger muscles
keeps you from hearing the computer fan, use an external sound block
such as rubber or swimming ear plugs instead of your fingers.

--------------------------------------------------------
Wagner Lipnharski - UST Research Inc. - Orlando, Florida
Forum and microcontroller web site:  http://www.ustr.net
Microcontrollers Survey:  http://www.ustr.net/tellme.htm

1999\09\19@135227 by M. F. LaBoo

From: Wagner Lipnharski <wagnerl@EARTHLINK.NET>
Subject: Re: Sound source angle detection


> If you want to *try* to duplicate the human hearing system, start by
> installing at least 10 microphones 5mm apart in each "robot ear", with
> frequency band filters, digitize those 20 analog signals, and feed a
> powerful processor that will create a spectral-spatial map based on
> phase, frequency and level...

This was a very interesting and informative post, and it raises an issue.
See how this project has ramped way up from its original humble beginnings?
Lord a'mighty, to think we started out with two microphones and a li'l bitty
algorithm!  ;-}

When we set out to emulate some behavior or function of a living "system"
it's not always easy to decide whether to try to replicate the organic
mechanism as closely  as possible, vs. designing the function from scratch,
optimized to hardware and software rather than brains and body.

I always think of how we could've been flying two thousand years ago if we
hadn't insisted on trying to mimic a bird's flapping wings.  Bamboo and silk
will make a very flyable Rogallo kite with not a single moving part.
Actually, it'd be kind of fun to see how well you could do with just *three*
microphones and a *medium size* algorithm...

1999\09\20@031437 by Dag Bakken

WL> If you want to *try* to duplicate the human hearing system, start by
WL> installing at least 10 microphones 5mm apart in each "robot ear", with
WL> frequency band filters, digitize those 20 analog signals, and feed a
WL> powerful processor that will create a spectral-spatial map based on
WL> phase, frequency and level.

Actually, I'm not trying to duplicate the human ear system, only
pointing out how it decodes directions.  Duplicating the mechanism
would be a hardware job like nothing else.  Decoding the frequency
spectrum is more of a mathematical job, and actually kind of doable.

-DS

Who's General Failure and why's he reading my disk?
