ramOverride currently prevents you from using exactly all of a server's memory, due to a faulty comparison in the RAM check. Replacing ">=" with ">" allows the new RAM value to equal the max RAM.
Test script:
```js
/** @param {NS} ns */
export async function main(ns) {
ns.tprint(ns.ramOverride(8));
}
```
Expected output (on fresh save):
```js
8
```
Output before this commit:
```js
1.6
```
* DOCUMENTATION: Clarify getOwnedSourceFiles when player overrides active levels of SFs
* Return Player.activeSourceFiles instead of Player.sourceFiles (see the sketch after this list)
* Get rid of zeroes in the map
* added utility info
* moved info to running script
* fix for RAM cost
* description changes
* fixed wrong formatting
* Added parent to ignored fields
---------
Co-authored-by: David Walker <d0sboots@gmail.com>
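For the getOwnedSourceFiles change, a quick way to see the active levels (assuming the Singularity form of the API):
```js
/** @param {NS} ns */
export async function main(ns) {
  // Entries now reflect active (possibly overridden) SF levels, and
  // level-0 entries no longer appear in the result.
  for (const sf of ns.singularity.getOwnedSourceFiles()) {
    ns.tprint(`SF-${sf.n} level ${sf.lvl}`);
  }
}
```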
The current implementation was naive; disableLog("ALL") stored a key for every function, iterating over a different object to do it (and iterating over objects is quite slow).
The common cases of Bitburner (and especially batching, where efficiency
matters most) are either never disabling anything, or disabling "ALL".
This optimizes for these two cases, at the expense of slightly more
complicated code to deal with the less-common edge cases.
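A minimal sketch of how such a fast path could be structured (names and shape are illustrative, not the actual code):
```js
// "ALL" is kept as a flag instead of being expanded into one key per
// function; the Set only holds per-function exceptions to that flag.
class LogSettings {
  #allDisabled = false;
  #exceptions = new Set();

  disableLog(fn) {
    if (fn === "ALL") {
      this.#allDisabled = true;
      this.#exceptions.clear();
    } else if (this.#allDisabled) {
      this.#exceptions.delete(fn); // already covered by the flag
    } else {
      this.#exceptions.add(fn);
    }
  }

  isDisabled(fn) {
    // The common cases (nothing disabled, or "ALL" disabled) reduce to a
    // boolean check plus a lookup in an almost-always-empty Set.
    return this.#allDisabled !== this.#exceptions.has(fn);
  }
}
```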
This adds a way to dynamically change the static RAM limit of a script,
which is also its current RAM usage. This makes it possible for scripts
to dynamically change their memory footprint, opening up new strategies
beyond current ram-dodging.
Calling functions still permanently increases the *dynamic* memory
limit; RAM-dodging is still the optimal strategy for avoiding RAM costs,
in that sense.
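For illustration, a sketch of growing a script's footprint mid-run (the 8GB target is arbitrary, and the assumption that a failed override leaves the old limit in place is mine):
```js
/** @param {NS} ns */
export async function main(ns) {
  const target = 8; // arbitrary target, in GB
  // Grow the static limit before an expensive phase. ramOverride returns
  // the resulting limit, so anything other than `target` means the change
  // did not apply (assumed: e.g. not enough free RAM on the host).
  if (ns.ramOverride(target) !== target) {
    ns.tprint("Could not grow the RAM limit; aborting.");
    return;
  }
  // ...expensive phase using the extra RAM...
}
```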
This also adds dynamicRamUsage to the info returned by
`getRunningScript`, to allow introspection on the currently needed ram.
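And a quick introspection sketch, assuming the new dynamicRamUsage field sits alongside the existing ramUsage field on the returned object:
```js
/** @param {NS} ns */
export async function main(ns) {
  const info = ns.getRunningScript(); // no args: the current script
  ns.tprint(`static limit: ${info.ramUsage}GB, dynamic usage: ${info.dynamicRamUsage}GB`);
}
```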
This eliminates a hole where spawn was unreliable, because other scripts could jump in and steal the RAM. It's not an API break, because 0 used to be an invalid value.
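A sketch of the now-valid zero delay, assuming the SpawnOptions form of spawn (the script name is hypothetical):
```js
/** @param {NS} ns */
export async function main(ns) {
  // With spawnDelay 0 (previously invalid), the replacement script starts
  // immediately, so no other script can grab the freed RAM in between.
  ns.spawn("worker.js", { threads: 1, spawnDelay: 0 });
}
```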
A significant portion of players who use ports pass objects through them. Currently they are required to handle that themselves via JSON serialization. This PR adds better support for passing objects, which is more convenient, more extensive, and better optimized (probably; more on this one later).
This adds zero overhead to existing port usage (including any usage passing only primitive types), and it isn't a breaking change. The questions to debate here are:
1. Should objects be supported in the first place?
2. If so, how exactly do we want to serialize them?
Based on an extensive discussion in Discord, the overwhelming majority answered "yes" to question one. As for question two, that has been much more hotly contested.
Ultimately, `structuredClone` was used, despite less-than-stellar performance, because other options were worse either in safety, speed, error-handling, or multiple of the above.
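For illustration, a sketch of what object passing looks like with this change (the port number and payload are arbitrary):
```js
/** @param {NS} ns */
export async function main(ns) {
  const port = ns.getPortHandle(1); // arbitrary port number
  // The object is deep-cloned on write (via structuredClone), so later
  // mutations on either side can't leak across the port.
  port.write({ target: "n00dles", threads: 10 });
  const job = port.read();
  ns.tprint(`weakening ${job.target} with ${job.threads} threads`);
}
```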
So far we calculate the effect of weaken in three (plus one) places:
* ns.weaken
* ns.weakenAnalyze
* terminal weaken
* server.weaken, where the BitNode multiplier is applied

I extracted the logic into a new Netscript helper function, getWeakenEffect. This gives us a single place to change if we ever want to adjust the formula. As a side effect, I added server.cpuCores to the terminal weaken to future-proof it in case the NPC server core PR (#963) is merged.
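A minimal sketch of what such a helper could look like; the 0.05 base and core-bonus form mirror the usual in-game weaken math, but the exact signature here is an assumption:
```js
// Sketch only: one shared place for the weaken formula.
export function getWeakenEffect(threads, cpuCores, bitNodeMult) {
  const coreBonus = 1 + (cpuCores - 1) / 16; // extra cores help a little
  return 0.05 * threads * coreBonus * bitNodeMult;
}
```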
* ns.ls filter can include leading slash in filename
* scp from terminal accepts multiple filenames
* terminal displays root / instead of ~ as base
* cd with no args returns to root
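For the ns.ls change, a quick illustration (the filename is hypothetical):
```js
/** @param {NS} ns */
export async function main(ns) {
  // These now return the same result: the filter may carry a leading slash.
  ns.tprint(ns.ls("home", "hack.js"));
  ns.tprint(ns.ls("home", "/hack.js"));
}
```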