Today, the doClone method performs a bunch of side effects: disabling
LFS smudge for git, triggering a Perforce changelist mapper, redacting
logs under the assumption that all implementations of VCSSyncer have
the same kind of potential leakage, and so forth.
This replaces the CloneCommand function on the syncer with a proper
Clone method (sketched below). It moves the responsibility for the
above to the syncer itself (not yet the changelist mapper, because that
will need a bit more refactoring that I don't want to do in this PR).
There are two main motivations for this PR:
- Make the cloning process less dependent on the implementation; all we care about is that there's a valid git repo in tmpDir after invocation.
- Allow us to run multiple steps in the cloning process (which we already did implicitly, just without any visibility) without the dirty hacks we previously used in the packages syncer, where we ran the actual command in the getter method and returned a bogus command. This will enable us to repack an exported Perforce depot after a successful conversion.
Nice side effect: the packages syncer feels a little less hacky, too.
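To make the new shape concrete, here is a minimal sketch. The Clone
name and tmpDir come from the description above; the package name and
the exact parameters are illustrative assumptions, not the real
Sourcegraph signature:

```go
package vcssyncer // hypothetical package name, for illustration only

import "context"

// VCSSyncer is a sketch, not the actual Sourcegraph definition: the
// syncer now owns the whole clone rather than handing back a single
// command for the caller to run.
type VCSSyncer interface {
	// Clone clones the repository into tmpDir. Implementations may run
	// multiple steps and handle their own side effects (disabling LFS
	// smudge, redacting implementation-specific secrets from logs), as
	// long as tmpDir contains a valid git repo when Clone returns.
	Clone(ctx context.Context, repoURL, tmpDir string) error
}
```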
This is a mechanical move to get the executor out of the enterprise/cmd
directory. Eventually, this directory should disappear; this is
another step towards that.
This does not change anything about how it's licensed.
## Test plan
CI is still passing; the local executor starts up.
The previous approach to enabling race detection was too radical and
accidentally led to building our binaries with the race flag enabled,
which caused issues when building images down the line.
This happened because putting a `test --something` in bazelrc also sets
it on `build`, which is absolutely not what we wanted. Usually folks
get this working by having a `--stamp` config setting that fixes this
when releasing binaries, which we don't have at this stage, as we're
still learning Bazel.
Luckily, this was caught swiftly. The fix instead takes a more
granular approach: the `go_test` rule now uses our own variant, which
injects the `race = "on"` attribute, but only on `go_test` (see the
sketch below).
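As a rough Starlark sketch of that pattern (the file path and exact
macro are illustrative assumptions, not necessarily what this change
ships):

```starlark
# go_defs.bzl (hypothetical path): wrap rules_go's go_test so the race
# detector is enabled on tests only, leaving `bazel build` unaffected.
load("@io_bazel_rules_go//go:def.bzl", _go_test = "go_test")

def go_test(**kwargs):
    # rules_go's go_test accepts race = "on"/"off"/"auto"; default it
    # to "on" while still letting individual targets override it.
    kwargs.setdefault("race", "on")
    _go_test(**kwargs)
```

BUILD files then load this `go_test` instead of the one from rules_go,
so only test targets pick up the race mode.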
## Test plan
CI: being a main-dry-run, it covers the container-building jobs, which
were the ones failing.
---------
Co-authored-by: Alex Ostrikov <alex.ostrikov@sourcegraph.com>
Right now, we use the default scanner split function, which is
`bufio.ScanLines`. This strips the trailing newline, and any `\r`
immediately before it, then returns the rest of the line as the token.
`PipeOutput` then puts the newline back by invoking `fmt.Fprintln`.
This has the positive effect that we essentially normalise `\r\n` and
`\n` into `\n`, but the negative effect that every line of output
becomes newline-terminated, even a final line that didn't include a
newline.
This matters because this output is then used for output variables when
executing batch specs, which means that something like `echo -n hi` will
end up with an extraneous newline, even though `echo -n` has only
written two bytes to stdout.
Instead, let's set the split function to one that splits by line, but
_retains_ the newline, and then we can write it to the piped writer(s)
with `fmt.Fprint` directly.
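A minimal sketch of such a split function; the name scanLinesKeepEOL
is hypothetical, and this is illustrative rather than the exact code
that landed:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"strings"
)

// scanLinesKeepEOL is like bufio.ScanLines, but keeps the trailing
// newline on each token and does not swallow a preceding \r.
func scanLinesKeepEOL(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	if i := bytes.IndexByte(data, '\n'); i >= 0 {
		// Return the line including its newline.
		return i + 1, data[:i+1], nil
	}
	if atEOF {
		// Final line without a trailing newline: return it unchanged.
		return len(data), data, nil
	}
	// Request more data.
	return 0, nil, nil
}

func main() {
	sc := bufio.NewScanner(strings.NewReader("hello\nhi"))
	sc.Split(scanLinesKeepEOL)
	for sc.Scan() {
		// Fprint, not Fprintln: each token already carries its newline
		// (or, for the final "hi", correctly carries none).
		fmt.Fprint(os.Stdout, sc.Text())
	}
}
```

With this, `echo -n hi` pipes through as exactly two bytes, with no
extraneous trailing newline.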
The one tradeoff here is that I haven't reimplemented the `\r`
swallowing that `bufio.ScanLines` does. In practice, I don't believe we
ever actually relied on this behaviour: Docker _may_ return extraneous
`\r` bytes when running with TTY mode enabled, but we never enable that
when executing batch specs, and if the user's container actually
outputs `\r`, we should retain it rather than filtering their output.
Fixes #36562.
This restores the buffered PipeOutput function that I accidentally
removed in #31081 because I didn't notice it was used in src-cli.
It adds the function back, along with tests for both methods, so we're
sure they actually do what they say they do.