Say you were autogenerating some Chisel code for some infrastructure in your chip: a single file instantiating a load of memory-mapped registers and then the IO assignments.
Then one day you add an extra register and the JVM goes bang and refuses to build it any more, thanks to an arbitrary 64 KB method size limit in the JVM:
[error] Could not write class HasRegsModuleContents because it exceeds JVM code size limits. Method scala/Some's code too large!
[error] one error found
[error] (chipBlocks / Compile / compileIncremental) Compilation failed
[error] Total time: 41 s, completed 27/11/2018 2:32:29 AM
Inside HasRegsModuleContents is a declaration of a bunch of registers, followed by one big regmap statement with all the register definitions for the chip; after that come the assignments to and from the module's io port.
This was working very well for us, but it now appears to have maxed out, which is rather annoying.
Has anyone come across this before? It's going to be real work to break this up into multiple register blocks (and more hardware, since we'd then have multiple bus interfaces on the pbus), so I'd appreciate it if anyone knows a way around it.
I believe that same restriction led to the splitting of the rocket-chip decode tables into multiple classes.
Unfortunately I don't think there's an easy workaround, since this is a historical design mistake in the JVM itself. I can think of two ways to fix the problem, but either of them is a bit of work:
Have your generator partition the register-creation logic into methods
Rather than generating Chisel source code, write Chisel itself to generate the necessary registers and related logic
#2 is probably The Right Way™, but #1 might be more approachable given your current infrastructure setup.
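To make #1 concrete, here's a minimal sketch of what the generator side could look like: instead of emitting one flat wall of regmap entries, it chunks them into several small helper methods plus a tiny method that concatenates them, so no single compiled method gets near the 64 KB bytecode limit. Everything here is illustrative, not from the original generator: the `ChunkedRegmapEmitter` / `regFields` names, the chunk size of 64, and the assumption that each register is a 32-bit `RegInit` wrapped in a `RegField`.

```scala
// Hypothetical generator helper: emit the register map as several
// small methods rather than one giant method body. Each emitted
// method returns a chunk of (offset -> Seq(RegField(...))) entries,
// and allRegFields concatenates them for the final regmap(...) call.
object ChunkedRegmapEmitter {
  def emitChunkedRegmap(offsets: Seq[Int], chunkSize: Int = 64): String = {
    val chunks = offsets.grouped(chunkSize).toSeq
    // One small method per chunk of register-map entries.
    val defs = chunks.zipWithIndex.map { case (offs, i) =>
      val entries = offs
        .map(o => f"    0x$o%03x -> Seq(RegField(32, RegInit(0.U(32.W))))")
        .mkString(",\n")
      s"  def regFields$i: Seq[(Int, Seq[RegField])] = Seq(\n$entries\n  )"
    }
    // A tiny method that just concatenates the chunks; the module body
    // then only needs: regmap(allRegFields: _*)
    val concat = chunks.indices.map(i => s"regFields$i").mkString(" ++ ")
    (defs :+ s"  def allRegFields: Seq[(Int, Seq[RegField])] = $concat")
      .mkString("\n\n")
  }
}
```

The same chunking idea applies to the io assignments after the regmap: group them into helper methods too, since they count toward the same per-method limit.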