Charles Oliver Nutter
2014-02-06 23:46:43 UTC
A lot more code is working now that I've opted to back off from
specializing anything beyond what the IR does itself. This means that
passing a block to nested closures is done by reifying it into a Proc,
sticking it in the DynamicScope, and unboxing it on the other side... among
other things. Methods always use boxed argument lists; invokedynamic is
used to handle boxing arguments on the calling side and unboxing them on
the receiving side. This *might* make that boxing eligible for escape
analysis to eliminate the allocation, but I have not dug deeper to see.
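To make the boxed calling convention concrete, here's a minimal sketch using
java.lang.invoke directly; the target shape and names are my own illustration,
not JRuby's actual signatures. The receiving side only ever sees an Object[]
argument list, and an asCollector adapter (the kind of adapter an
invokedynamic bootstrap would install) does the boxing at the call site.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BoxedCallSketch {
    // Receiving side: the compiled body unpacks its arguments from a boxed list.
    public static Object target(Object self, Object[] args) {
        return args[0];
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle boxed = lookup.findStatic(BoxedCallSketch.class, "target",
                MethodType.methodType(Object.class, Object.class, Object[].class));

        // Calling side: collect two loose arguments into an Object[] right at
        // the call site; if the JIT inlines through the handle, that array is
        // a candidate for escape analysis.
        MethodHandle callSite = boxed.asCollector(Object[].class, 2);
        System.out.println(callSite.invoke("self", "a", "b"));  // prints "a"
    }
}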
I also have closures basically working now, though non-local flow control
(non-local break and return, for example) still requires too much data from
IR structures to implement at the moment.
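For flavor, here is the usual exception-based shape of non-local return, with
hypothetical names; this is a sketch of the problem, not JRuby's actual
implementation. The block throws a lightweight jump exception tagged with a
token for its defining frame, and only that frame's method catches it, which
is exactly the kind of frame/scope metadata the IR has to hand me.

public class NonLocalReturnSketch {
    static final class ReturnJump extends RuntimeException {
        final Object methodToken;   // identifies the defining method's frame
        final Object value;
        ReturnJump(Object methodToken, Object value) {
            super(null, null, false, false);   // no message, cause, or stack trace
            this.methodToken = methodToken;
            this.value = value;
        }
    }

    interface Block { void yield(); }

    static void runBlock(Block b) { b.yield(); }   // stand-in for a method taking a block

    // The defining method: a return inside its block throws a ReturnJump tagged
    // with this frame's token; the method catches only its own jumps.
    static Object definingMethod() {
        final Object token = new Object();
        try {
            runBlock(() -> { throw new ReturnJump(token, "returned from block"); });
            return "fell through";
        } catch (ReturnJump j) {
            if (j.methodToken != token) throw j;   // not ours: keep unwinding
            return j.value;
        }
    }

    public static void main(String[] args) {
        System.out.println(definingMethod());      // prints "returned from block"
    }
}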
I also ran into an issue with how heap variable assignment is being
optimized:
for the closure passed to foo: a = 'here'; foo { a = 'bar'; puts a }
Linearized instructions for JIT:
0 thread_poll
1 line_num(0)
2 %t_a_1 = "bar"
3 %cl_1_0 = call_1o(FUNCTIONAL, 'puts', %self, [%t_a_1]){1O}
4 store_lvar(%t_a_1, -e_CLOSURE_1, a(1:0))
5 return(%cl_1_0)
6 %cl_1_2 = recv_jruby_exc
7 store_lvar(%t_a_1, -e_CLOSURE_1, a(1:0))
8 runtime_helper(catchUncaughtBreakInLambdas, [%cl_1_2])
The exception table here handles exceptions from 0 to 5 by branching to 6.
However, instruction 7 attempts to load a value that is first assigned inside
that protected range (at instruction 2). When I try to compile this to JVM
bytecode, it fails because there's no guarantee that %t_a_1 has been assigned
before 7 runs; the exception could be raised before instruction 2 ever
executes.
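As a Java-source analogue of that shape (my own sketch of the layout above,
not the generated bytecode): if the temp were declared without an initializer,
the read in the handler would be rejected for exactly this reason, since its
only assignment sits inside the protected region and the exception can fire
before it runs. Pre-initializing the temporary before entering the protected
region is one obvious way to make the handler verifiable, though I'm not
claiming that's the fix we'll end up with.

public class HandlerAssignmentSketch {
    static void storeLvar(Object[] scope, int index, Object value) {   // stand-in for store_lvar
        scope[index] = value;
    }

    static Object example(Object[] dynScope) {
        Object tA1 = null;                     // %t_a_1, pre-initialized so the handler may read it
        try {
            tA1 = "bar";                       // 2: %t_a_1 = "bar"
            Object result = tA1;               // 3: call to puts elided
            storeLvar(dynScope, 0, tA1);       // 4: store_lvar
            return result;                     // 5: return
        } catch (RuntimeException e) {         // 6: recv_jruby_exc
            storeLvar(dynScope, 0, tA1);       // 7: store_lvar in the handler
            throw e;                           // 8: runtime_helper elided; just rethrow
        }
    }

    public static void main(String[] args) {
        System.out.println(example(new Object[1]));   // prints "bar"
    }
}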
In any case, things are moving along well. There are a few things to fix and
lots of things to optimize, but progress is being made.
- Charlie