PICList Thread: 'Binary to ASCII'
1999\09\05@083347 by Steve Thackery
I have a challenge. I need to convert a 48-bit binary number (stored in 6
consecutive memory registers) into an ASCII string (i.e. '0' to '9') which
will then get transmitted to a PC. The string will be up to 15 characters
long.
I can do this by going first to BCD and then from BCD to ASCII (although
this may well not be the best way). Microchip have published "Binary 16
to BCD" code in AN526, and I also have "B24toBCD" which is just an
extended version of the 16-bit program. It looks straightforward to
extend it further to 48 bits.
The thing is, I'd simply be working "parrot fashion". I can't suss out
how the BCD-to-ASCII algorithm works!
Can anyone explain in plain English the basic principle?
Also, with a bit of luck that might allow me to work out how to avoid the
intermediate BCD stage altogether and go straight from binary to ASCII.
Or has anyone done that already?
Thanks,
Steve Thackery
Suffolk, England.
Web Site: http://www.btinternet.com/~stevethack/
"Having children is hereditary. If your parents didn't have any, neither
will you." - Henry Morgan
1999\09\05@121920 by Anne Ogborn
The BCD to ASCII approach is a reasonable one.
ASCII encodes characters as numbers from 0 - 255.
Actually, only the first 128 positions are in the standard
ASCII set. The upper 128 are not well standardized, some of the
competing standards being DOS-ASCII, extended ASCII, and ISO Latin-1.
The first 32 (numbers 0 - 31) are 'control characters',
most of which are rarely used. Some commonly used ones are
0x09 tab
0x0A line feed
0x0D carriage return
0x00 has become special because it is the end of string marker for C and C++
string handling. Hence embedding a nul in a message is a sure way to
be unpopular.
0x20 is space.
The digits are encoded starting with 0x30 = '0'. This makes conversion very
simple - just add 0x30 to a number from 0 - 9 and you have the character that
represents it.
Now, BCD is an old format for representing numbers by powers of 10.
Originally it was two digits per byte, one packed in the upper nybble and one
in the lower nybble, but you have to be perverse to do that anymore. Nowadays
it's just 4 bits/byte, with the values guaranteed to be 0-9.
So, take your binary-to-BCD converter's output, unpack it if necessary, and just
add 0x30 to it.
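To make that concrete, here is a minimal C sketch of the unpacked-BCD-to-ASCII
step (the function name and the five-digit example are purely illustrative):

#include <stdio.h>

/* Illustrative sketch: convert unpacked BCD digits (one digit per byte,
 * values 0-9) to ASCII by adding 0x30 ('0'). For packed BCD, split each
 * byte into its two nybbles first. */
void bcd_to_ascii(const unsigned char *bcd, char *out, int ndigits)
{
    for (int i = 0; i < ndigits; i++)
        out[i] = (char)(bcd[i] + 0x30);   /* '0'..'9' */
    out[ndigits] = '\0';
}

int main(void)
{
    unsigned char digits[] = {1, 2, 3, 4, 5};  /* unpacked BCD for 12345 */
    char text[6];
    bcd_to_ascii(digits, text, 5);
    printf("%s\n", text);                      /* prints 12345 */
    return 0;
}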
Here's a table of the entire ASCII set:
http://members.tripod.com/~plangford/index.html
--
Anniepoo
Need loco motors?
http://www.idiom.com/~anniepoo/depot/motors.html
1999\09\05@140719 by Jim Ham
One solution to this problem on a stack-oriented machine leads to one of
the most elegant methods using recursion that I have ever seen. Let's see if
I can remember how it goes...
// provide a place to keep the dividend
static int48 accumulator ;              // int48: some 48-bit integer type

// this is a routine that divides the 48-bit integer pointed to by "pacc"
// by "divisor".
// It leaves the quotient in place of the dividend and returns the remainder
char div48( int48 *pacc, char divisor ) ;
void doprint( void ) ;

void print48bit( int48 number ) {
    accumulator = number ;
    if ( accumulator < 0 ) {
        putch( '-' ) ;
        accumulator *= -1 ;
    }
    doprint() ;
    return ;
}

void doprint( void ) {
    char remainder ;
    remainder = div48( &accumulator, 10 ) ;
    if ( accumulator ) doprint() ;      // recurse while digits remain
    putch( remainder + '0' ) ;          // printed on the way back out, most significant digit first
}
I just remembered where I saw this - K&R 2nd Ed. p87. My rendition is a
little different, but pretty much the same. This version uses an automatic
so you only have to divide once. Their routine pushes the 48-bit number on
the stack, uses two divides and one modulus.
---> disclaimer: below not tested - probably full of typos!! <--------
I would just unroll this technique for the PIC. It does require some
storage. In the PIC we have to use linear storage since we don't have a
stack. I can't think of any way around this, as the printable digits come
out of the calculation backwards. (OK, OK, I'm waiting for the 69 postings
to show me I'm wrong...:-> ).
Requirements:
6 bytes for the 48-bit accumulator
17 bytes for the null-terminated result string (15 digits + minus sign + null)
2 bits for flags
a routine "div48" that divides the accumulator by 10 and returns the
remainder in W. "div48" also maintains a zero flag which is only set when
the accumulator is zero.
a routine "neg48" that takes the twos complement of the number in the
accumulator.
result      RES 17                  ; null-terminated result string
accumulator RES 6                   ; 48-bit accumulator
flags       RES 1                   ;
zero48      equ 0                   ; set when the accumulator is zero, maintained by div48
needminus   equ 1                   ; flag that we need to print a minus sign
;
; make a 48-bit number into a printable string.
; Initial conditions: 48-bit integer loaded in the accumulator
; On return:
;   FSR points at the first character of the null-terminated string
;   W contains the number of characters in the string
;
print48bit
        ; point to the end of the string area
        movlw   result+16           ; point at the byte after the last digit
        movwf   FSR                 ;
        clrf    INDF                ; make the terminating null
        bcf     flags, needminus
        btfss   accumulator+5, 7    ; check the sign bit of the top byte
        goto    loophere            ; positive, just convert it
        call    neg48               ; negative: negate it (this could be in-line)
        bsf     flags, needminus
loophere
        call    div48               ; divide by 10, remainder in W, sets the zero flag
        addlw   '0'                 ; convert to ASCII
        decf    FSR, F              ; move the pointer back
        movwf   INDF                ; store the digit
        btfss   flags, zero48       ; check if the accumulator is zero
        goto    loophere            ; not zero yet, do the next character
        btfss   flags, needminus
        goto    done
        movlw   '-'                 ; put in the minus sign and move the pointer
        decf    FSR, F
        movwf   INDF                ;
done
        movf    FSR, W
        sublw   result+16           ; (result+16) - FSR = number of characters
        return  ;
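For reference, a hedged C sketch of what the "div48" helper has to do (the
function name, types and the MSB-first byte order here are assumptions made
for illustration, not Jim's actual routine): it is ordinary long division by
a single digit, one byte at a time from the most significant end.

#include <stdint.h>

/* Hypothetical sketch of the div48 idea: divide a 6-byte (48-bit) value,
 * stored most-significant byte first in acc[0..5], by 10 in place and
 * return the remainder. */
uint8_t div48_by10(uint8_t acc[6])
{
    uint16_t rem = 0;
    for (int i = 0; i < 6; i++) {
        uint16_t cur = (uint16_t)(rem << 8) | acc[i];  /* bring down next byte */
        acc[i] = (uint8_t)(cur / 10);                  /* quotient byte stays in place */
        rem    = cur % 10;                             /* carry remainder to next byte */
    }
    return (uint8_t)rem;                               /* 0..9 */
}

Calling this repeatedly and collecting the remainders yields the decimal
digits least significant first, which is exactly why the characters have to
be stored backwards.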
At 01:31 PM 9/5/99 +0100, you wrote:
{Quote hidden}
Jim Ham, Porcine Associates
(650)326-2669 fax(650)326-1071
"http://www.porcine.com"
1999\09\05@171001 by Steve Thackery
Wow! Anne and Jim, thanks for such comprehensive replies.
The spirit of helpfulness on this list is wonderful. Especially
impressive is your patience with PIC newbies like me.
Magic. Thanks very much, folks.
Steve Thackery
Suffolk, England.
Web Site: http://www.btinternet.com/~stevethack/
"Having children is hereditary. If your parents didn't have any, neither
will you." - Henry Morgan
1999\09\05@191824 by William K. Borsum
At 01:31 PM 9/5/99 +0100, you wrote:
>I have a challenge. I need to convert a 48-bit binary number (stored in 6
>consecutive memory registers) into an ASCII string (i.e. '0' to '9') which
>will then get transmitted to a PC. The string will be up to 15 characters
>long.
We cheat. An 8-bit byte is composed of two 4-bit nibbles. Four bits cover
the hex digits 0-F, so we break a byte down into nibbles, convert each
nibble to the ASCII representation of its hex digit, and send that.
Thus the byte 3FH gets sent as two ASCII characters: "3" and "F".
HOWEVER, it doubles the number of characters sent.
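As a hedged C sketch of that nibble trick (the function names are
illustrative, not Kelly's code):

/* Sketch of the nibble approach: one byte becomes two ASCII hex characters,
 * so 0x3F is sent as '3' then 'F'. */
static char nibble_to_hex(unsigned char n)
{
    return (n < 10) ? (char)('0' + n) : (char)('A' + n - 10);
}

void byte_to_hex(unsigned char b, char out[2])
{
    out[0] = nibble_to_hex((unsigned char)(b >> 4));   /* high nibble */
    out[1] = nibble_to_hex((unsigned char)(b & 0x0F)); /* low nibble  */
}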
The reasons we did it this way are buried in the dim and mystical
past--somewhere before the invention of dirt.
Why not just send an 8-bit byte as the ascii equivalent?
One possible reason would be ambiguity between data and a sync byte.
On one of our new systems, the host sends a single character trigger which
causes the slave to return a packet of data--since the packet is
pre-defined, it can contain anything.
Hope this helps.
Kelly
William K. Borsum, P.E. -- OEM Dataloggers and Instrumentation Systems
<borsum@dascor.com> & <http://www.dascor.com>
1999\09\06@172951 by Steve Thackery
Thanks for your thoughts, Kelly. You raise an interesting point:
> Why not just send an 8-bit byte as the ascii equivalent?
> One possible reason would be ambiguity between data and a sync byte.
That's the reason, in a nutshell. My current implementation sends the
pure binary as a "packet" of six bytes (topped and tailed with the usual
start and stop bits, of course). Trouble is, how do you tell when you've
got to the end of one packet and the start of the next? You can't
implement a "framing" byte because it isn't possible to assing a unique
bit pattern to it. The real data can contain all possible 256 values.
At the moment I'm cheating: I'm relying on the fact that there is always a
short interval of time between packets. This means I have to implement a
"timeout" in the receiving software. This works fine in NT, which is
almost-real-time multitasking, but under W95 the software fails
intermittently when the operating system is away doing something else.
Hence the decision to convert the whole lot to an ASCII string. Then I
can use a null, or even a carriage return, to signify the end of a string.
It also makes the data file that gets recorded on the PC human readable.
The hex approach is attractive. It means I can stick to ASCII characters
but the conversion becomes trivial: just a sixteen-entry look-up table.
I'll have to think about how to sort it out in the receiving software.
Shouldn't be too difficult.
Thanks again.
Steve Thackery
Suffolk, England.
Web Site: http://www.btinternet.com/~stevethack/
"Having children is hereditary. If your parents didn't have any, neither
will you." - Henry Morgan
1999\09\06@180459 by Bob Drzyzgula
On Mon, Sep 06, 1999 at 10:27:44PM +0100, Steve Thackery wrote:
> Thanks for your thoughts, Kelly. You raise an interesting point:
>
> > Why not just send an 8-bit byte as the ascii equivalent?
> > One possible reason would be ambiguity between data and a sync byte.
>
> That's the reason, in a nutshell. My current implementation sends the
> pure binary as a "packet" of six bytes (topped and tailed with the usual
> start and stop bits, of course). Trouble is, how do you tell when you've
> got to the end of one packet and the start of the next? You can't
> implement a "framing" byte because it isn't possible to assign a unique
> bit pattern to it. The real data can contain all possible 256 values.
If that's what it is, perhaps you might consider using
the uuencode format. It isn't the most efficient of all
protocols, but it is pretty well documented and brain-dead
easy to implement. Here's a description of uuencode
from the uuencode(5) man page on my Linux machine:
"Groups of 3 bytes are stored in 4 characters,
6 bits per character. All are offset by a
space to make the characters printing. The last
line may be shorter than the normal 45 bytes.
If the size is not a multiple of 3, this fact
can be determined by the value of the count on
the last line. Extra garbage will be included
to make the character count a multiple of 4.
The body is terminated by a line with a count
of zero. This line consists of one ASCII space."
So basically they take a few bits of each number and
stuff it into a full byte, offset by a fixed value that
guarantees that the result will be a printable ASCII
character. With your 48-bit application, each number can
easily be stuffed into 8 ascii characters. On the PIC side,
you should be able to do most of the bit-twiddling with
rotates and xors and stuff like that.
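A minimal C sketch of that 3-bytes-into-4-characters packing, following the
man-page description quoted above (this simplifies away line counts, padding
and the usual zero/backquote quirk, and the function name is my own):

/* Sketch of the uuencode-style packing: every 3 input bytes become 4
 * printable characters, 6 bits each, offset by a space (0x20). */
void pack3to4(const unsigned char in[3], char out[4])
{
    out[0] = (char)(0x20 + ((in[0] >> 2) & 0x3F));
    out[1] = (char)(0x20 + (((in[0] << 4) | (in[1] >> 4)) & 0x3F));
    out[2] = (char)(0x20 + (((in[1] << 2) | (in[2] >> 6)) & 0x3F));
    out[3] = (char)(0x20 + (in[2] & 0x3F));
}

For the 48-bit value, two such groups give the 8 printable characters
mentioned above.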
Just an idea.
--Bob
--
============================================================
Bob Drzyzgula                          It's not a problem
bob@drzyzgula.org                      until something bad happens
============================================================
http://www.drzyzgula.org/bob/electronics/
============================================================
1999\09\06@180913 by paulb
Steve Thackery wrote:
> Trouble is, how do you tell when you've got to the end of one packet
> and the start of the next? You can't implement a "framing" byte
> because it isn't possible to assign a unique bit pattern to it. The
> real data can contain all possible 256 values.
Ah! FAQ!
This was thought out long ago, and that's what *ASCII* is all about!
You do indeed use "framing" bytes, called STX and ETX (Start of Text and
End of Text), to define your packet. A packet may be preceded by a preamble
for various reasons, which can be NULs.
Of course you're right. Individual data bytes could take on the byte
values of STX or ETX, so they are always preceded by a DLE (Data Link
Escape) character. And of course, you may need to send the value of DLE
so you prefix it with itself.
And that's about it! The packet length is not constant due to the
prefixing, but it's maximally efficient. It's also dead easy to code.
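As a hedged C sketch of the DLE-stuffed framing described here (putch stands
in for whatever transmit routine you have; the constants are the standard
ASCII control codes):

#define STX 0x02   /* start of text    */
#define ETX 0x03   /* end of text      */
#define DLE 0x10   /* data link escape */

extern void putch(char c);   /* assumed UART transmit routine */

/* STX and ETX mark the packet; any data byte equal to DLE, STX or ETX is
 * preceded by DLE so the receiver never mistakes it for a frame marker. */
void send_framed(const unsigned char *data, int len)
{
    putch(STX);
    for (int i = 0; i < len; i++) {
        unsigned char b = data[i];
        if (b == DLE || b == STX || b == ETX)
            putch(DLE);                 /* escape it */
        putch((char)b);
    }
    putch(ETX);
}

On the receive side, a byte following a DLE is taken literally, so an
unescaped STX or ETX is always a real frame boundary.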
--
Cheers,
Paul B.
1999\09\07@005331 by William K. Borsum
At 10:27 PM 9/6/99 +0100, you wrote:
{Quote hidden}
>The hex approach is attractive. It means I can stick to ASCII characters
>but the conversion becomes trivial: just a sixteen-entry look-up table.
>I'll have to think about how to sort it out in the receiving software.
>Shouldn't be too difficult.
It's not. Extract a pair of ASCII characters that form a byte using the
usual mid$(string,x,2), then derive the value of the hex string:
nibble 1 * 16 + nibble 2. VBasic has a bunch of direct conversions which
we put into functions to do all this automatically--just passed a string of
two characters and got back an integer and an ASCII character corresponding
to the integer.
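In C, a minimal sketch of the same decode (illustrative names, not the
VBasic functions referred to above):

/* PC-side decode: take two ASCII hex characters and rebuild the byte as
 * high nibble * 16 + low nibble. */
static unsigned char hex_nibble(char c)
{
    if (c >= '0' && c <= '9') return (unsigned char)(c - '0');
    if (c >= 'A' && c <= 'F') return (unsigned char)(c - 'A' + 10);
    if (c >= 'a' && c <= 'f') return (unsigned char)(c - 'a' + 10);
    return 0;   /* out-of-range input ignored in this sketch */
}

unsigned char hex_pair_to_byte(const char pair[2])
{
    return (unsigned char)(hex_nibble(pair[0]) * 16 + hex_nibble(pair[1]));
}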
Kelly
William K. Borsum, P.E. -- OEM Dataloggers and Instrumentation Systems
<borsum@dascor.com> & <http://www.dascor.com>
1999\09\07@125950 by James M. Newton
1999\09\07@165307 by paulb
James Newton wrote:
> Why not a length byte at the beginning? Dead easiest to code. Divide
> longer xmitions up into 255 byte packets.
Probably because if you had a transmission error, you would never
manage to resync. You need flags (and escapes) to do so.
--
Cheers,
Paul B.
1999\09\07@171619 by eplus1
Errr... RIGHT! I knew that!! Just testing! <GRIN>
If the need is for only one "escape" value for resyncing, then escape only
one or two (infrequently used) characters and pass all the rest as binary.
Paul is right, now that I think about it, that is what the ASCII control
characters were designed for.
Isn't this the same sort of thing that is done in PPP? (Notice how I
cleverly remembered not to say PPP Protocol?)
James Newton, webmaster http://get.to/techref
(hint: you can add your own private info to the techref)
jamesnewton@geocities.com
1-619-652-0593 phone
{Original Message removed}
1999\09\07@175819 by paulb
James Newton wrote:
> Isn't this the same sort of thing that is done in PPP? (Notice how I
> cleverly remembered not to say PPP Protocol?)
AFAIK, that is *exactly* what PPP does. But only AFAIK and that's not
actually much. If I only had more time to delve the Linux docs...
--
Cheers,
Paul B.
1999\09\07@181304 by Sean H. Breheny
In the configuration stage of PPP, IIRC, two sets of "control characters"
(which need to be escaped) are negotiated by the two sides. Each set
specifies which characters need to be escaped for that side to receive them
correctly.
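For reference, standard asynchronous PPP framing (RFC 1662) does its byte
stuffing roughly like this; a hedged C sketch, with putch standing in for
the transmit routine and the negotiated ACCM bitmap passed in as a parameter:

#define PPP_FLAG 0x7E   /* frame delimiter  */
#define PPP_ESC  0x7D   /* escape character */

extern void putch(char c);   /* assumed transmit routine */

/* The flag and escape bytes, plus any control character (0x00-0x1F) selected
 * by the ACCM bitmap, are sent as ESC followed by the byte XORed with 0x20. */
void ppp_stuff_byte(unsigned char b, unsigned long accm)
{
    int needs_escape = (b == PPP_FLAG) || (b == PPP_ESC) ||
                       (b < 0x20 && ((accm >> b) & 1UL));
    if (needs_escape) {
        putch((char)PPP_ESC);
        putch((char)(b ^ 0x20));
    } else {
        putch((char)b);
    }
}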
It is actually quite interesting to see what a jumble of protocols a TCP/IP
stack is. PPP is like a heavily modified HDLC and includes a CRC. IP then
goes on top of that, and has no error control over the contents, only the
header. Then TCP or UDP add another layer of checksum to protect the data,
and TCP also includes indexing to the packets to ensure reassembly in the
correct order.
Sean
At 02:11 PM 9/7/99 -0700, you wrote:
{Quote hidden}