Will,

There may be a subtle bug in the executable code produced by your compiler. I am sending you an example program that displays weird behaviour when compiled with ghc, but not when compiled with hbc. The results have been checked by hand against a commercial C-based signal processing tool, and the hbc numbers agree with the commercial package (within numerical precision limitations). Most of the ghc numbers agree too, but some are off in a systematic way. NOTE: this program uses the Native module, so perhaps the problem lies there; I don't know.

The package contains the following:

1) a module LPA.lhs
2) a main program Main.lhs
3) a Makefile, set up for ghc 0.19 and hbc (you can use 0.999.5 if you want)
4) an example speech file

To compile, edit the Makefile and comment/uncomment the definitions for whichever compiler you want to use; then type make. Run the program first for hbc:

% lpa speech speech.cep.hbc

Then ``make clean'', edit the Makefile, remake and run for ghc:

% lpa speech speech.cep.ghc

Now look at the first 10 lines of each ``cep'' file using od:

% od -fv speech.cep.hbc | head -10
% od -fv speech.cep.ghc | head -10

Notice that the numbers pretty much agree, except for the 1st, the 18th, the 35th, etc. You see, the program analyzes frames of speech 100 times per second. For each analysis frame, it dumps 17 floating point numbers. Your program disagrees with the hbc program about the FIRST coordinate of each of these vectors. I don't understand how this can happen.

I'm writing an applications paper for JFP, but I can't include a comparison to your compiler if it produces bad numbers. So... I know you're busy, but could you look at this soon?

Good luck!!

dave g.
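Incidentally, the pattern of bad positions (the 1st, 18th, 35th, ... numbers, i.e. every 17th value starting from the first) is easy to check mechanically rather than by eyeballing od output. Here is a minimal sketch; `mismatches` is a hypothetical helper, not part of the package, and the dump data below is made up purely for illustration:

```haskell
module Main where

import Data.List (findIndices)

-- Hypothetical helper (not part of the package): report the
-- zero-based positions where two dumps disagree beyond a tolerance.
mismatches :: Double -> [Double] -> [Double] -> [Int]
mismatches tol xs ys = findIndices (> tol) (zipWith (\x y -> abs (x - y)) xs ys)

-- With 17 numbers per frame, a bug in the first coordinate of every
-- frame shows up as mismatch positions 0, 17, 34, ... -- exactly the
-- 1st, 18th, 35th numbers in the od listing.
main :: IO ()
main = print (mismatches 1e-6 hbcDump ghcDump)  -- prints [0,17]
  where
    -- made-up data: two frames of 17 numbers each, differing only
    -- in the first coordinate of each frame
    hbcDump = concat (replicate 2 (1.0 : replicate 16 0.5))
    ghcDump = concat (replicate 2 (2.0 : replicate 16 0.5))
```

Running this over the real speech.cep.hbc and speech.cep.ghc floats should print positions that are all congruent to 0 mod 17, which is what makes me say it is the first coordinate of each vector that goes wrong.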