Our IndexedDB handling did not have very good error handling. It wasn't
reporting the actual errors that occurred, nor was it using actual Error
objects. In some cases it also had overly convoluted Promise use, and the
file didn't need to be .tsx either.
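As a rough illustration of the intended shape (not the actual patch),
wrapping an IDBRequest so failures surface the real error object:

```ts
// Minimal sketch: reject with the request's DOMException (an Error
// subclass) instead of a bare event, so the real failure reason gets
// reported. The helper name is illustrative.
function requestToPromise<T>(request: IDBRequest<T>): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () =>
      reject(request.error ?? new Error("Unknown IndexedDB error"));
  });
}
```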
The biggest issue was that if any problem occurred during the main
load(), it would end up as an unhandled rejection and only be logged to
the console. This extends the previous catch to cover this case as well,
so that the recovery screen is activated.
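Conceptually, the fix looks like this (activateRecoveryMode() is a
hypothetical stand-in for the real recovery path):

```ts
// Any rejection from the async load path now lands in the same handler
// as synchronous failures, instead of becoming an unhandled rejection.
load().catch((error: unknown) => {
  console.error(error);
  activateRecoveryMode(error); // hypothetical: shows the recovery screen
});
```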
We are getting more error reports that don't have enough info in them.
It turns out that populating the stack trace was gated behind the dev
flag; in reality, production builds are where we need it most. Even if
the trace ends up obfuscated (source maps should prevent this), we can
work out the actual source lines with enough effort if need be.
This also switches to using the actual stack trace, rather than the
"component" trace (the tree of JSX objects), since knowing where the
code failed is far more valuable. It also ensures we get the full error
details when things go wrong in savefile loading.
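For reference, a hedged sketch of the direction, using React's standard
error boundary hooks (submitErrorReport() is a hypothetical stand-in for
the real reporting path):

```tsx
import React from "react";

declare function submitErrorReport(message: string, stack: string): void;

interface State {
  hasError: boolean;
}

export class ErrorBoundary extends React.Component<React.PropsWithChildren<unknown>, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  componentDidCatch(error: Error, _errorInfo: React.ErrorInfo): void {
    // Report error.stack (where the code actually failed) rather than
    // _errorInfo.componentStack (the tree of JSX components).
    submitErrorReport(error.message, error.stack ?? "(no stack available)");
  }

  render(): React.ReactNode {
    return this.state.hasError ? null : this.props.children;
  }
}
```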
If the game takes long enough to load, certain counters can become
eligible to run as soon as Engine.start() runs. When this happens,
eventually Router.page() is called, which throws an Error since Router
isn't initialized yet. (Dropping a breakpoint before Engine.start() and
waiting at least 30 seconds is enough to reliably repro, but I have seen
this both live and in tests.)
This fixes it so that Router.page() is valid immediately, returning a
value of Page.LoadingScreen. It also removes the isInitialized field,
since this is now redundant. Trying to switch pages is still an error,
but that doesn't happen without user input, whereas checking the current
page is quite common.
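A minimal sketch of the shape (the real module has more pages and
plumbing; names here follow the description):

```ts
enum Page {
  LoadingScreen,
  Terminal,
  Infiltration,
  // ...
}

let currentPage: Page = Page.LoadingScreen;
let routingFn: ((page: Page) => void) | undefined;

export const Router = {
  // Valid immediately: before initialization this reports LoadingScreen.
  page: (): Page => currentPage,
  // Switching pages before initialization is still an error.
  toPage: (page: Page): void => {
    if (routingFn === undefined) throw new Error("Router is not initialized");
    currentPage = page;
    routingFn(page);
  },
};
```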
This also consolidates the "should we show toasts" check behind a
function in Router, centralizing the logic and keeping it consistent
across a few use cases. This means (for instance) that the "autosave is
disabled" logic won't run during infiltration. (The toast should already
have been suppressed.)
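Continuing the sketch above, the consolidated check might look roughly
like this (the function name is illustrative):

```ts
// Central answer to "should we show toasts right now?", so callers like
// the autosave-disabled warning don't each re-implement the page check.
export function showToasts(): boolean {
  const page = Router.page();
  return page !== Page.LoadingScreen && page !== Page.Infiltration;
}
```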
* Fix the type declaration of `!!raw-loader!` modules.
Instead of declaring them to export an object with a single key
`default` whose value is a string, the modules have a default export
which is a string.
Note that this doesn't actually change the generated code, just the
types that TypeScript sees. The code worked before because the only
thing done with the values was coercing them to a string, which
turned into a no-op.
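The corrected ambient declaration looks roughly like this (the module
pattern is illustrative):

```ts
// A default export that is a string, matching what the bundler emits.
declare module "!!raw-loader!*" {
  const content: string;
  export default content;
}
```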
* Switch from using `raw-loader` to using a source asset module.
`raw-loader` was deprecated in webpack v5.
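Roughly, the webpack rule changes from a loader to a module type (the
`test` pattern here is illustrative):

```ts
// webpack.config.ts sketch: "asset/source" exposes the file's raw text,
// replacing the deprecated raw-loader.
import type { Configuration } from "webpack";

const config: Configuration = {
  module: {
    rules: [{ test: /\.txt$/, type: "asset/source" }],
  },
};

export default config;
```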
To use this, add a line like "ns.ramOverride(2);" as the first statement
in main(). Not only will it take effect at runtime, but it will now
*also* be parsed at compile time, changing the script's static RAM
limit. Since ns.ramOverride is a 0-cost function, the call to it on
startup then becomes a no-op.
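For example (the `NS` import path follows the TypeScript template and is
illustrative; plain JS scripts can skip it):

```ts
import type { NS } from "@ns";

export async function main(ns: NS): Promise<void> {
  // Parsed statically *and* executed at runtime: the script's static RAM
  // limit becomes 2GB without needing args or launch options.
  ns.ramOverride(2);
  // ...rest of the script runs within the 2GB budget...
}
```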
This is an often-requested feature, and allows scripts to set their RAM
usage without it needing to be specified via args or options when they
are launched. This also reduces pressure on the static RAM analysis to
be perfect all the time. (But certain limits, such as "function names
must be unique across namespaces," remain.)
This also adds a tooltip to the RAM calculation, to make this feature
slightly more discoverable.
browserslist complains if caniuse-lite is more than 6 months old. This
updates to the latest version.
Note that there were no changes in supported versions based on this
update.
The weight of the intelligence bonus is a multiplier to the percentage
increase. So, rather than calculating it with a weight of 3 and then
dividing by 3, we can just calculate it with a weight of 1.
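Worked out, assuming the percentage increase has the form weight * p:

```ts
// (weight * p) / weight === 1 * p, so the divide-by-3 step is redundant.
const p = 0.25;                // example percentage increase
const before = (3 * p) / 3;    // old: weight 3, then divide by 3
const after = 1 * p;           // new: weight 1
console.log(before === after); // true
```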
This adds a way to dynamically change the static RAM limit of a script,
which is also its current RAM usage. This makes it possible for scripts
to dynamically change their memory footprint, opening up new strategies
beyond current RAM-dodging.
Calling functions still permanently increases the *dynamic* memory
limit; RAM-dodging is still the optimal strategy for avoiding RAM costs,
in that sense.
This also adds dynamicRamUsage to the info returned by
`getRunningScript`, to allow introspection of the currently needed RAM.
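A quick usage sketch (values and flow are illustrative; the `NS` import
follows the TypeScript template):

```ts
import type { NS } from "@ns";

export async function main(ns: NS): Promise<void> {
  // Raise this script's static RAM limit at runtime before a RAM-heavy
  // phase, assuming the host has the memory free.
  ns.ramOverride(8);
  // ...expensive calls here permanently raise the dynamic limit...
  const info = ns.getRunningScript();
  ns.print(`dynamic RAM needed so far: ${info?.dynamicRamUsage}`);
}
```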
This eliminates a hole where spawn was unreliable, because other scripts
could jump in and steal the RAM. It's not an API break, because 0 used
to be an invalid value.
*All* RAM calculations must take place in units of hundredths-of-a-GB,
or inconsistencies arise.
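For illustration (this helper is not from the codebase), keeping
comparisons in integer hundredths-of-a-GB sidesteps floating-point drift:

```ts
// 1.6 + 0.05 !== 1.65 in IEEE-754 doubles, but the integer forms agree.
const toHundredths = (ramGB: number): number => Math.round(ramGB * 100);
console.log(1.6 + 0.05 === 1.65);                                           // false
console.log(toHundredths(1.6) + toHundredths(0.05) === toHundredths(1.65)); // true
```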
Also adds slightly more verbose logging when the dynamic RAM check
fails.