First proof of concept of a LLVM driven backend for the neko virtual machine

First proof of concept of a LLVM driven backend for the neko virtual machine

alstrup
Hi,

Vadim Atlygin has been working on an LLVM backend for Neko, and we are
happy to announce that "Hello world" now works.

For LLVM readers, Neko is a tight virtual machine which is primarily
targeted by haXe. haXe is a statically typed programming language
that compiles to JavaScript, Flash, and Neko. Both haXe and
Neko are developed by Nicolas Cannasse, with contributions from many
others. You can read more about haXe and Neko at www.haxe.org and
www.nekovm.org.

The current Neko VM has an x86 JIT. The aim of this work is to make the
Neko VM run fast on 64-bit machines, and hopefully improve
performance on 32-bit targets as well.

You can get the code from this repository:
http://github.com/vava/neko_llvm_jit

The work is still incomplete - only the first 10 opcodes out of 66 are
implemented, but that is enough to get "Hello world" and simple
arithmetic to work. The current port is still slower than the existing
VM, but that is expected. The next step is to implement the rest of
the opcodes: at first we will simply wire them up to C callbacks,
and later convert them to real LLVM code. Until real LLVM code is
produced for the opcodes, it will remain slower than the original VM.

If you want to help out, you can contribute test cases for opcodes,
C callback implementations for some of the opcodes, or, even
better, optimized code for opcodes. We propose to use the Neko mailing
list for discussion.

Regards,
Vadim Atlygin & Asger Ottar Alstrup

--
haXe - an open source web programming language
http://haxe.org

Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Franco Ponticelli
Great and very appreciated news!
Out of curiosity, why did you decide to translate Neko to LLVM instead of creating a new target for haXe?
This project is awesome ;)

Franco.


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Lee Sylvester
In reply to this post by alstrup
Excellent work. This is a very cool idea. Do you have any idea of the
kind of overall speed increase we can likely expect when this project
reaches its target milestone?

Cheers,
Lee





Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Gamehaxe
In reply to this post by Franco Ponticelli
Do you think you can slip in 32-bit ints while you are at it?  :)

Hugh


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Franco Ponticelli

On Wed, May 19, 2010 at 3:38 PM, Hugh Sanderson <[hidden email]> wrote:
Do you think you can slip in 32-bit ints while you are at it?  :)

+1


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Vadim Atlygin
In reply to this post by Gamehaxe
Neko opcodes are typeless, so this would be really hard to achieve; I would have to build some kind of type inference system around them to find out whether a given value is an actual int or a pointer to something.
I have also been thinking about optimizations to get rid of the millions of little objects in which Neko stores everything non-trivial. But that is a plan for the really distant future.

Best regards,
Vadim.

On Thu, May 20, 2010 at 12:38 AM, Hugh Sanderson <[hidden email]> wrote:
Do you think you can slip in 32-bit ints while you are at it?  :)

Hugh



Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Cauê W.
That's fantastic news!

Congratulations for the achievement! : )

Cheers
Cauê


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

alstrup
In reply to this post by alstrup
Hi,

We did try to work with the hxcpp backend for haXe, but even after
spending quite some time on it and submitting patches, it seemed it
would be very hard to get it to production quality with our codebase.
We have roughly 100,000 lines of haXe code targeting Neko, and our
code seemed to trigger many problems in the hxcpp backend. It could be
that our code relies on implementation details of the Neko VM and
hxcpp is technically correct, but after trying hard, we gave up. On
the other hand, we really need the code to run faster, so we have to
do something. Nicolas suggested adding an LLVM backend to the Neko
code as the most robust way to do this, so that is what we are trying.
We took the code for Neko and are adding a new LLVM backend, alongside
the interpreter and the x86 JIT.

We hope the advantages of LLVM will be speed, as well as support for
multiple targets. The current Neko JIT only targets x86, while LLVM
can provide JITs for more targets. For instance, LLVM supports
ARM, so this work could be a way to target such systems.

And when it comes to speed for C and C++ code, LLVM is competitive
with gcc in many situations. As another case in point, see the new
Haskell backend: it achieves great performance even though it has not
been tuned beyond a custom calling convention. The Haskell developers
will be optimizing their LLVM backend this summer, and there is a fair
chance that it will beat their existing native backend. See for
instance

http://donsbot.wordpress.com/2010/02/21/smoking-fast-haskell-code-using-ghcs-new-llvm-codegen/

and

http://blog.llvm.org/2010/05/glasgow-haskell-compiler-and-llvm.html

LLVM is a mature project which will most likely be maintained and
improved for many years, since it is becoming the system compiler for
MacOS.

Our primary aim is to speed up our haXe code targeting Neko on 64-bit
machines. We don't know if we will achieve that, but we hope we will.
The biggest challenge is probably that Neko is a dynamic language,
rather than a purely statically typed language like Haskell, so it is
hard to tell whether we will achieve the speedup, and if we do, how
much it will be. But it seems like the best bet for our situation, and
we hope the contribution will be valuable for others as well.

The code is really easy to download and build if you have a Linux
setup. It is also trivial to add new test cases: just write a .neko
file in the tests folder that produces a small output, and type
"rake". That runs the test with the interpreter, the JIT and the LLVM
backend, and compares the output. Repeat until tired, and then send us
a patch! Once you have done that, adding the implementation of an
opcode is also simple, as long as it is just a callback to C. So I
invite everyone to give it a try.

So in conclusion, I hope we will be able to reach a substantial
speedup. In our testing, PHP is faster than haXe on Neko in many
practical cases, so we really should try to speed things up.

Regards,
Asger


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Nicolas Cannasse
> Our primary aim is to speed up our haXe code targetting Neko on 64 bit
> machines. We don't know if we will achieve that, but we hope we will.
> The biggest challenge is probably that Neko is a dynamic language,
> rather than a purely statically typed language like Haskell, so it is
> hard to tell whether we will achieve the speedup, and if we do, how
> much it will be. But it seems like the best bet for our situation, and
> we hope that the contribution will be valuable for others as well.

One possibility I was thinking about is being able to specify type
restrictions on Neko function parameters. This would at least make it
possible to remove some checks and to do some type inference on the
local variables as well. It could be supported in the JIT/interpreter
too with minimal changes.

> So in conclusion, I hope that we will be able to reach a substantial
> speedup. In our testing, PHP is faster than haXe on neko in many
> practical cases, so we really should try to speed things up.

This is surprising news, since our benchmarks don't show such results.
I would be interested in knowing what measurements you made and which
results you got. Is this specific to 64-bit, or does it also apply to
32-bit systems?

Best,
Nicolas


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

alstrup
Hi,

On Thu, May 20, 2010 at 10:32 AM, Nicolas Cannasse
<[hidden email]> wrote:
>> So in conclusion, I hope that we will be able to reach a substantial
>> speedup. In our testing, PHP is faster than haXe on neko in many
>> practical cases, so we really should try to speed things up.
>
> This is surprising news since our benchmarks doesn't show such results. I
> would be interested in knowing what measurements you made and which results
> you got. Is this specific to 64-bit or also apply to 32-bit systems ?

This is a 64-bit EC2 cloud, so no Neko JIT. One simple example is just
to handle a web request, pull out parameters, prepare a quoted SQL
string, insert a row in the database, and return a simple JSON
encoding of the result. Very simple code. With bytecode-cached PHP, we
could do 2,000 requests a second with a persistent database
connection. With haXe on the Neko interpreter, it was 1,000. Yes, we
could probably rewrite the code to use native Neko arrays, do custom
JSON encoding, or other tricks, but with as much code as we have, that
is not really realistic.
So maybe the problem lies in the haXe JSON encoding library, maybe in
the MySQL wrapper, maybe in the decoding of HTTP parameters, maybe in
the GC, maybe in mod_neko or mod_tora. But it is a general picture we
have found: when we take the time to really profile the code, it is
often possible to make it run faster by rewriting it. The problem is
that we have so much code that it seems better to get the basics
running fast enough that we do not have to profile and rewrite 100,000
lines of code. And this is especially a problem since there is really
no good profiler for haXe, so it is often quite some work just to
identify which part of the code is slow. We end up inserting a lot of
timing calls all over the code, only to find out that the GC is taking
all the time. And then the job is completely different: now you have
to find out why we produce so much memory traffic, and there really
are no good tools for analyzing that.

Regards,
Asger


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Tony Polinelli
In reply to this post by Nicolas Cannasse
LLVM sounds interesting, but wouldn't the existing C++ target allow the native speed increase you are looking for? Are there other benefits to such a target?





--
Tony Polinelli
http://touchmypixel.com


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

alstrup
On Thu, May 20, 2010 at 10:52 AM, Tony Polinelli <[hidden email]> wrote:
> LLVM sounds interesting, but wouldnt the existing c++ target allow the
> native speed increase that you are looking for? Are there other benefits to
> such a target?

We tried to use the hxcpp target, but it was not stable for us. We
tried to fix all the bugs we could in that backend, but gave up since
there was not enough progress, and it seemed some of the dynamic parts
were unlikely to ever work.

Regards,
Asger


Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Franco Ponticelli
In reply to this post by alstrup

This is a 64 bit EC2 cloud, so no neko JIT. One simple example is just
to handle a web request, pull out parameters, prepare a quoted SQL
string, insert a row in the database, and return a simple JSON
encoding of the result. Very simple code.


What JSON library are you using? It seems to me there could be a lot of string concatenation in there, and Neko is not very fast at that; actually it is quite slow, since it needs to build an object for every string fragment, and PHP is certainly faster there. Take a good look at your code and libraries, and use the StringBuf class as much as possible. See if that helps (if not already done, of course).

Franco



Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

tommedema
This reminds me of some things I heard about the C++ target not being good with string manipulation, as well as the Neko target. The Neko target is also supposedly not good at certain calculations because of a 31-bit int limitation, or something like that.

I came to these conclusions accidentally through a discussion, which makes me wonder what else is out there that I do not know about.

Wouldn't it be wise to add these kinds of limitations or important notes to the target sections on haxe.org? I can't do this myself, as I lack the knowledge and experience.

As for this LLVM work, would it be possible to run haXe-written Neko apps with it?

Regards,
Tom




Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Gamehaxe
In reply to this post by Nicolas Cannasse
Hi,
Yes, I do not really see a big performance gain from using LLVM/JIT
unless you get the types right. Consider the addition below:

class A { public var x:Float; }
var a = new A();

var y = a.x + a.x;

On the simplest level, you would run:
Variant temp1 = find member "x" in variant a
Variant temp2 = find member "x" in variant a
Variant y = sum_double temp1, temp2

Whether you "switch(OP_CODE)" or JIT the operations, your code will
still be dominated by "find member 'x' in a", and by the sum
operation, which has to do something like:
VariantOfDouble( DoubleOfVariant(temp1), DoubleOfVariant(temp2))

This would need to be inlined to get any performance gain.

If temp1 & temp2 were kept as native doubles, then you would see some
nice gains.

Finally, to get near native speed, you might consider a hybrid of fixed
and dynamic lookup. So the implementation of the "A" class might go
something like:

struct A
{
   PrototypeMap *name_to_member_map;
   InstanceMap  *additional_member_map;
   double x;
};

And the runtime can say: "I know that 'a' is of type 'A', therefore I
will find the offset and read a double at base + 8." In the case of an
unknown type, it can look in the prototype map to find the offset, and
then perhaps in the instance map if there are dynamic members.

JavaScript engines seem to have some very impressive type inference and
can run at remarkable speeds these days. It might be worth studying
those implementations for additional ideas, because it seems Neko is
where JS was a few years ago.

Hugh




Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

blackdog-2
Hugh says ...

"Javascript seems to have some very impressive type inference and
can run at remarkable speeds these days."

Or maybe just use an existing server-side target like node.js with my
hxNode signatures? You may find it's faster; I haven't tested. My
guess is that the experience of the people doing V8 should not be
quibbled with - I think it has quadrupled in speed since it was first
announced.

In hxNode I've implemented FileSystem but not all of haxe.io;
implementing the missing pieces would be quicker than the LLVM port,
but I certainly applaud that effort, or anything associated with
haXe/LLVM. The main issue with node.js in terms of existing haXe code
is that it's all async and does not fit the existing haXe API very
well. I think an official async API/interface for haXe from Nicolas
would be welcome, even if not officially implemented.

bd


On Thu, 2010-05-20 at 21:01 +0800, Hugh Sanderson wrote:

> Hi,
> Yes, I do not really see a big performance gain by using LLVM/JIT
> unless you get the types right.  Consider the addition below:
>
> class A { public var x:Float; }
> var a = new A();
>
> var y = a.x + a.x;
>
> On the simplest level,
> you would run
> Variant temp1 = find member "x" in variant a
> Variant temp2 = find member "x" in variant a
> Variant y = sum_double temp1, temp2
>
> Whether you "switch(OP_CODE)" or JIT the operations, your code will
> still be limited by dominated by "find member 'x' in a", and also the
> sum operation, which will have to do something like:
> VariantOfDouble( DoubleOfVariant(temp1), DoubleOfVariant(temp2))
>
> This would need to be inlined to get any performance gain.
>
> If temp1 & temp2 were kept as native doubles, then you would see some nice  
> gains.
>
> Finally, to get near native speed, you might consider a hybrid of fixed
> and dynamic lookup.  So the implementation of the "A" class may go  
> something like:
>
> struct A
> {
>    PrototypeMap *name_to_member_map;
>    InstanceMap  *additional_member_map;
>    double x;
> };
>
> And the runtime can go "I know that 'a' is of type 'A', therefore I will  
> find
> the offset and location of a double at "base + 8".  In the case of unknown  
> type, it can
> look in the prototype map to find the offset, and then perhaps in the  
> instance
> map if you have dynamic members.
>
> Javascript seems to have some very impressive type inference and
> can run at remarkable speeds these days.  It might be worth studying
> these implementations for additional ideas, because it seems neko is
> where JS was a few years ago.
>
> Hugh
>
>
> >> Our primary aim is to speed up our haXe code targeting Neko on 64 bit
> >> machines. We don't know if we will achieve that, but we hope we will.
> >> The biggest challenge is probably that Neko is a dynamic language,
> >> rather than a purely statically typed language like Haskell, so it is
> >> hard to tell whether we will achieve the speedup, and if we do, how
> >> much it will be. But it seems like the best bet for our situation, and
> >> we hope that the contribution will be valuable for others as well.
> >
> > One possibility I was thinking about was to be able to specify type
> > restrictions on neko function parameters. This would at least make it
> > possible to remove some checks and do some type inference on the
> > local variables as well. This could be supported in the
> > JIT/Interpreter as well with minimal changes.
> >
> >> So in conclusion, I hope that we will be able to reach a substantial
> >> speedup. In our testing, PHP is faster than haXe on neko in many
> >> practical cases, so we really should try to speed things up.
> >
> > This is surprising news, since our benchmarks don't show such results.
> > I would be interested in knowing what measurements you made and which
> > results you got. Is this specific to 64-bit, or does it also apply to
> > 32-bit systems?
> >
> > Best,
> > Nicolas
>


--
haXe - an open source web programming language
http://haxe.org

Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

John A. De Goes

Indeed, server-side JavaScript is much faster than Neko. Also, an asynchronously architected standard library is more portable than a synchronous one, since asynchronous behavior can always be emulated with synchronous primitives, but the reverse is not true.

Regards,

John

On May 20, 2010, at 7:30 AM, blackdog wrote:

> Hugh says ...
>
> "Javascript seems to have some some very impressive type inference and
> can run at remarkable speeds these days."
>
> Or maybe just use an existing server side target like node.js with my
> hxnode signatures? You may find it's faster, i haven't tested. My guess
> is that the experience of the people doing V8 should not be quibbled
> with - i think it's quadrupled in speed since first announced.


--
haXe - an open source web programming language
http://haxe.org

Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Niel Drummond-3
On Thu, May 20, 2010 at 07:40:11AM -0600, John A. De Goes wrote:

>
> Indeed, server-side JavaScript is much faster than Neko. Also, an asynchronous architected standard-library is more portable than synchronous, since asynchronous can always be emulated with synchronous, but the reverse is not true.
>
> Regards,
>
> John
>
> On May 20, 2010, at 7:30 AM, blackdog wrote:
>
> > Hugh says ...
> >
> > "Javascript seems to have some some very impressive type inference and
> > can run at remarkable speeds these days."
> >
> > Or maybe just use an existing server side target like node.js with my
> > hxnode signatures? You may find it's faster, i haven't tested. My guess
> > is that the experience of the people doing V8 should not be quibbled
> > with - i think it's quadrupled in speed since first announced.
> >

What is the standard way of deploying node.js? Do you use fastcgi or standard cgi? IMO this is the weak point of server-side js; otherwise, from raw benchmarks, javascript does quite well.

- Niel


--
haXe - an open source web programming language
http://haxe.org

Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

blackdog-2

I proxy node.js behind nginx, as I treat node.js as an application
server. This provides much greater flexibility in deploying
applications over various machines while still fronted by a single "site".

For example, currently I have an nginx front end on my bingo product
which proxies /chat to a given node.js chat server and /servlet/bingo
which proxies to a node.js bingo server. Embedding everything in mod_ is
daft IMO. An nginx proxy setup looks like this in nginx.conf:


   location /servlet/bingo/ {
      add_header    Cache-Control  no-cache;
      proxy_pass        http://localhost:8084/servlet/bingo/;
      proxy_set_header  X-Real-IP  $remote_addr;
      proxy_pass_header Content-Length;
      proxy_buffering off;
   }
       
   location /chat {
      add_header    Cache-Control  no-cache;
      proxy_pass        http://localhost:9000/chat;
      proxy_set_header  X-Real-IP  $remote_addr;
   }



On Thu, 2010-05-20 at 16:00 +0200, Niel Drummond wrote:

>
> What is the standard way of deploying node.js ? Do you use fastcgi or standard cgi ? IMO this is the weak point of server-side js, otherwise from raw benchmarks javascript does quite well.
>
> - Niel
>


--
haXe - an open source web programming language
http://haxe.org

Re: [Neko] First proof of concept of a LLVM driven backend for the neko virtual machine

Niel Drummond-3
blackdog wrote:
> I proxy node.js behind nginx as I treat node.js as an application
> server. This provides much greater flexibility in deployment of
> applications over various machines although fronted by a single "site".
>  
That is very interesting, and something I've been meaning to try out -
though the fact that the VM is only single-threaded does worry me a
little in the environment you have described.

At any rate, compared to other languages that target javascript, haxe is
a good platform for developing server-side javascript, and IMO something
that should be advertised more on haxe.org.

- Niel



--
haXe - an open source web programming language
http://haxe.org