While debugging an error with verify_signed_buffer(), the error
messages from run-command weren't very useful:
error: cannot create pipe for gpg: Too many open files
error: could not run gpg.
because they didn't indicate *which* pipe couldn't be created.
Print which pipe failed to be created in the error message so we
can more easily debug similar problems in the future.
For example, the above error now prints:
error: cannot create standard error pipe for gpg: Too many open files
error: could not run gpg.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create find_hook() function to determine if a given hook exists and is
executable. If it is, the path to the script will be returned;
otherwise NULL is returned.
This encapsulates the tests that are used to check for the existence of
a hook in one place, making it easier to modify those checks if that is
found to be necessary. This also makes it simple for places that can
use a hook to check if a hook exists before doing, possibly lengthy,
setup work which would be pointless if no such hook is present.
The returned value is left as a static value from get_pathname() rather
than a duplicate because it is anticipated that the return value will
either be used as a boolean, immediately added to an argv_array list
which would result in it being duplicated at that point, or used to
actually run the command without much intervening work. Callers which
need to hold onto the returned value for a longer time are expected to
duplicate the return value themselves.
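For illustration, a sketch of the shape such a helper takes (the static
buffer below stands in for get_pathname(); details are simplified):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    extern const char *get_git_dir(void);   /* provided by git's environment code */

    /* Sketch only: the real helper builds the path via get_pathname(). */
    const char *find_hook(const char *name)
    {
        static char path[PATH_MAX];

        snprintf(path, sizeof(path), "%s/hooks/%s", get_git_dir(), name);
        if (access(path, X_OK) < 0)
            return NULL;        /* no such hook, or not executable */
        return path;            /* static storage: copy it to keep it around */
    }

A caller that only wants the boolean answer can then write
if (find_hook("pre-commit")) and skip its setup work otherwise.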
Signed-off-by: Aaron Schrab <aaron@schrab.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a sub-command dies due to a signal, we encode the
signal number into the numeric exit status as "signal -
128". This is easy to identify (versus a regular positive
error code), and when cast to an unsigned integer (e.g., by
feeding it to exit), matches what a POSIX shell would return
when reporting a signal death in $? or through its own exit
code.
So we have a negative value inside the code, but once it
passes across an exit() barrier, it looks positive (and any
code we receive from a sub-shell will have the positive
form). E.g., death by SIGPIPE (signal 13) will look like
-115 to us inside git, but will end up as 141 when we
call exit() with it. And a program killed by SIGPIPE but run
via the shell will come to us with an exit code of 141.
Unfortunately, this means that when the "use_shell" option
is set, we need to be on the lookout for _both_ forms. We
might or might not have actually invoked the shell (because
we optimize out some useless shell calls). If we didn't invoke
the shell, we will see the sub-process's signal death
directly, and run-command converts it into a negative value.
But if we did invoke the shell, we will see the shell's
128+signal exit status. To be thorough, we would need to
check both, or cast the value to an unsigned char (after
checking that it is not -1, which is a magic error value).
Fortunately, most callsites do not care at all whether the
exit was from a code or from a signal; they merely check for
a non-zero status, and sometimes propagate the error via
exit(). But for the callers that do care, we can make life
slightly easier by just using the consistent positive form.
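As a sketch of the arithmetic involved (the helper name is invented for
this example): the internal encoding is "signal - 128", so SIGPIPE (13)
is -115, while the shell-propagated form is 128 + 13 = 141; casting the
negative form to unsigned char yields the same 141.

    /*
     * Sketch only: fold both encodings of a signal death into the
     * shell-style positive "128 + signal" form.
     */
    static int positive_exit_code(int code)
    {
        if (code == -1)
            return code;                 /* magic "error" value: leave it alone */
        if (code < 0)
            return (unsigned char)code;  /* e.g. -115 (SIGPIPE) becomes 141 */
        return code;                     /* already 141, or a normal exit code */
    }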
This actually fixes two minor bugs:
1. In launch_editor, we check whether the editor died from
SIGINT or SIGQUIT. But we checked only the negative
form, meaning that we would fail to notice a signal
death exit code which was propagated through the shell.
2. In handle_alias, we assume that a negative return value
from run_command means that errno tells us something
interesting (like a fork failure, or ENOENT).
Otherwise, we simply propagate the exit code. Negative
signal death codes confuse us, and we print a useless
"unable to run alias 'foo': Success" message. By
encoding signal deaths using the positive form, the
existing code just propagates it as it would a normal
non-zero exit code.
The downside is that callers of run_command can no longer
differentiate between a signal received directly by the
sub-process, and one propagated. However, no caller
currently cares, and since we already optimize out some
calls to the shell under the hood, that distinction is not
something that should be relied upon by callers.
Fix the same logic in t/test-terminal.perl for consistency [jc:
raised by Jonathan in the discussion].
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 1327452 cleaned up an unused parameter from
wait_or_whine, but forgot to update a caller that is inside
"#ifdef NO_PTHREADS".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
SIGINT and SIGQUIT are not generally interesting signals to
the user, since they are typically caused by the user hitting "^C"
or otherwise telling their terminal to send the signal.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do not actually use this parameter; instead we complain
from the child itself (for fork/exec) or from start_command
(if we are using spawn on Windows).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code to wait for subprocess and remove it from our internal queue
wasn't quite right.
* dg/run-command-child-cleanup:
run-command.c: fix broken list iteration in clear_child_for_cleanup
Iterate through children_to_clean using 'next' fields but with an
extra level of indirection. This allows us to update the chain when
we remove a child and saves us managing several variables around
the loop mechanism.
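Roughly, the pointer-to-pointer idiom looks like this (a sketch, with the
structure reduced to the fields that matter here):

    #include <stdlib.h>
    #include <sys/types.h>

    struct child_to_clean {
        pid_t pid;
        struct child_to_clean *next;
    };
    static struct child_to_clean *children_to_clean;

    /* Walk the list through the 'next' links themselves, so unlinking
     * a node is a single assignment and needs no "previous" variable. */
    static void clear_child_for_cleanup(pid_t pid)
    {
        struct child_to_clean **pp;

        for (pp = &children_to_clean; *pp; pp = &(*pp)->next) {
            struct child_to_clean *child = *pp;
            if (child->pid == pid) {
                *pp = child->next;   /* unlink */
                free(child);
                break;
            }
        }
    }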
Signed-off-by: David Gould <david@optimisefitness.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git foo" errored out with "Not a directory" when the user had a non
directory on $PATH, and worse yet it masked an alias "foo" to run.
* jc/maint-sane-execvp-notdir:
sane_execvp(): ignore non-directory on $PATH
When you have a non-directory on your PATH, a funny thing happens:
$ PATH=$PATH:/bin/sh git foo
fatal: cannot exec 'git-foo': Not a directory?
Worse yet, as real commands always take precedence over aliases,
this behaviour interacts rather badly with them:
$ PATH=$PATH:/bin/sh git -c alias.foo=show git foo -s
fatal: cannot exec 'git-foo': Not a directory?
This is because an ENOTDIR error from the underlying execvp(2) is
reported back to the caller of our sane_execvp() wrapper as-is.
Translating it to ENOENT, just like the case where we _might_ have
the command in an unreadable directory, fixes it. Without an alias,
we would get
git: 'foo' is not a git command. See 'git --help'.
and we use the 'foo' alias when it is available, of course.
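A minimal sketch of the translation (simplified; the real wrapper also
handles other errno values):

    #include <errno.h>
    #include <unistd.h>

    static int sane_execvp_sketch(const char *file, char *const argv[])
    {
        execvp(file, argv);        /* only returns on failure */
        if (errno == ENOTDIR)
            errno = ENOENT;        /* a non-directory on $PATH is "command not found" */
        return -1;
    }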
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 35ce862 (pager: Work around window resizing bug in
'less', 2007-01-24) causes git's pager sub-process to wait
to receive input after forking but before exec-ing the
pager. To handle this, run-command had to grow a "pre-exec
callback" feature. Unfortunately, this feature does not work
at all on Windows (where we do not fork), and interacts
poorly with run-command's parent notification system. Its
use should be discouraged.
The bug in less was fixed in version 406, which was released
in June 2007. It is probably safe at this point to remove
our workaround. That lets us rip out the preexec_cb feature
entirely.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Mops up an unfortunate fallout from bw/spawn-via-shell-path topic.
By Johannes Sixt
* js/spawn-via-shell-path-fix:
Do not use SHELL_PATH from build system in prepare_shell_cmd on Windows
When PATH contains an unreadable directory, alias expansion code did not
kick in, and failed with an error that said "git-subcmd" was not found.
By Jeff King (1) and Ramsay Jones (1)
* jk/run-command-eacces:
run-command: treat inaccessible directories as ENOENT
compat/mingw.[ch]: Change return type of exec functions to int
The recent change to use SHELL_PATH instead of "sh" to spawn shell commands
is not suited for Windows:
- The default setting, "/bin/sh", does not work when git has to run the
shell because it is a POSIX style path, but not a proper Windows style
path.
- If it worked, it would hard-code a position in the file system where
the shell is expected, making git (more precisely, the POSIX toolset that
is needed alongside git) non-relocatable. But we cannot sacrifice
relocatability on Windows.
- Apart from that, even though the Makefile leaves SHELL_PATH set to
"/bin/sh" for the Windows builds, the build system passes a mangled path
to the compiler, and something like "D:/Src/msysgit/bin/sh" is used,
which is doubly bad because it points to where /bin/sh resolves to on
the system where git was built.
- Finally, the system's CreateProcess() function that is used under
mingw.c's hood does not work with forward slashes and cannot find the
shell.
Undo the earlier change on Windows.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When execvp reports EACCES, it can be one of two things:
1. We found a file to execute, but did not have
permissions to do so.
2. We did not have permissions to look in some directory
in the $PATH.
In the former case, we want to consider this a
permissions problem and report it to the user as such (since
getting this for something like "git foo" is likely a
configuration error).
In the latter case, there is a good chance that the
inaccessible directory does not contain anything of
interest. Reporting "permission denied" is confusing to the
user (and prevents our usual "did you mean...?" lookup). It
also prevents git from trying alias lookup, since we do so
only when an external command does not exist (not when it
exists but has an error).
This patch detects EACCES from execvp, checks whether we are
in case (2), and if so converts errno to ENOENT. This
behavior matches that of "bash" (but not of simpler shells
that use execvp more directly, like "dash").
Test stolen from Junio.
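A sketch of the check, under the assumption that walking $PATH by hand is
enough for illustration (the real code uses its own PATH-walking helper):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Does any $PATH entry contain an executable by this name? */
    static int exists_in_PATH_sketch(const char *file)
    {
        const char *p = getenv("PATH");

        while (p && *p) {
            const char *end = strchr(p, ':');
            size_t len = end ? (size_t)(end - p) : strlen(p);
            char buf[4096];

            if (len) {      /* an empty entry means "."; skipped in this sketch */
                snprintf(buf, sizeof(buf), "%.*s/%s", (int)len, p, file);
                if (!access(buf, X_OK))
                    return 1;
            }
            p = end ? end + 1 : p + len;
        }
        return 0;
    }

    /* Called after execvp() has failed with EACCES for a command
     * given without a slash. */
    static void soften_eacces_sketch(const char *file)
    {
        if (errno == EACCES && !strchr(file, '/') && !exists_in_PATH_sketch(file))
            errno = ENOENT;   /* blame the unreadable $PATH entry, not permissions */
    }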
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
During the testing of the 1.7.10 rc series on Solaris for OpenCSW, it
was discovered that t7006-pager was failing due to finding a bad "sh"
in PATH after a call to execvp("sh", ...). This call was setup by
run_command.c:prepare_shell_cmd.
The PATH in use at the time saw /opt/csw/bin given precedence to
traditional Solaris paths such as /usr/bin and /usr/xpg4/bin. A
package named schilyutils (Joerg Schilling's utilities) was installed
on the build system and it delivered a modified version of the
traditional Solaris /usr/bin/sh as /opt/csw/bin/sh. This version of
sh suffers from many of the same problems as /usr/bin/sh.
The command-specific pager test failed due to the broken "sh" handling
^ as a pipe character. It tried to fork two processes when it
encountered "sed s/^/foo:/" as the pager command. This problem was
entirely dependent on the PATH of the user at runtime.
Possible fixes for this issue are:
1. Use the standard system() or popen() which both launch a POSIX
shell on Solaris as long as _POSIX_SOURCE is defined.
2. The git wrapper could prepend SANE_TOOL_PATH to PATH thus forcing
all unqualified commands run to use the known good tools on the
system.
3. The run_command.c:prepare_shell_command() could use the same
SHELL_PATH that is in the #! line of all scripts and not rely
on PATH to find the sh to run.
Option 1 would preclude opening a bidirectional pipe to a filter
script and would also break git for Windows as cmd.exe is spawned from
system() (cf. v1.7.5-rc0~144^2, "alias: use run_command api to execute
aliases", 2011-01-07).
Option 2 is not friendly to users as it would negate their ability to
use tools of their choice in many cases. Alternately, injecting
SANE_TOOL_PATH such that it takes precedence over /bin and /usr/bin
(and anything with lower precedence than those paths) as
git-sh-setup.sh does would not solve the problem either as the user
environment could still allow a bad sh to be found. (Many OpenCSW
users will have /opt/csw/bin leading their PATH and some subset would
have schilyutils installed.)
Option 3 allows us to use a known good shell while still honouring the
users' PATH for the utilities being run. Thus, it solves the problem
while not negatively impacting either users or git's ability to run
external commands in convenient ways. Essentially, the shell is a
special case of tool that should not rely on SANE_TOOL_PATH and must
be called explicitly.
With this patch applied, any code path leading to
run_command.c:prepare_shell_cmd can count on using the same sane shell
that all shell scripts in the git suite use. Both the build system
and run_command.c will default this shell to /bin/sh unless
overridden.
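Conceptually, the change amounts to something like the following sketch
(argument layout simplified):

    #ifndef SHELL_PATH
    #define SHELL_PATH "/bin/sh"     /* the build system normally overrides this */
    #endif

    /* Sketch only: run 'cmd' through the known-good shell instead of
     * whatever "sh" the user's PATH happens to find first. */
    static void prepare_shell_argv_sketch(const char *cmd, const char **nargv)
    {
        nargv[0] = SHELL_PATH;
        nargv[1] = "-c";
        nargv[2] = cmd;
        nargv[3] = NULL;
    }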
Signed-off-by: Ben Walton <bwalton@artsci.utoronto.ca>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Several git commands are so-called dashed externals, that is commands
executed as a child process of the git wrapper command. If the git
wrapper is killed by a signal, the child process will continue to run.
This is different from internal commands, which always die with the git
wrapper command.
Enable the recently introduced cleanup mechanism for child processes in
order to make dashed externals act more in line with internal commands.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we spawn a helper process, it should generally be done
and finish_command called before we exit. However, if we
exit abnormally due to an early return or a signal, the
helper may continue to run in our absence.
In the best case, this may simply be wasted CPU cycles or a
few stray messages on a terminal. But it could also mean a
process that the user thought was aborted continues to run
to completion (e.g., a push's pack-objects helper will
complete the push, even though you killed the push process).
This patch provides infrastructure for run-command to keep
track of PIDs to be killed, and clean them on signal
reception or input, just as we do with tempfiles. PIDs can
be added in two ways:
1. If NO_PTHREADS is defined, async helper processes are
automatically marked. By definition this code must be
ready to die when the parent dies, since it may be
implemented as a thread of the parent process.
2. If the run-command caller specifies the "clean_on_exit"
option. This is not the default, as there are cases
where it is OK for the child to outlive us (e.g., when
spawning a pager).
PIDs are cleared from the kill-list automatically during
wait_or_whine, which is called from finish_command and
finish_async.
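The bookkeeping boils down to something like this sketch (names
simplified; signal-handler registration and error handling are omitted):

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>

    struct child_to_clean {
        pid_t pid;
        struct child_to_clean *next;
    };
    static struct child_to_clean *children_to_clean;

    static void cleanup_children(int sig)
    {
        while (children_to_clean) {
            struct child_to_clean *p = children_to_clean;
            children_to_clean = p->next;
            kill(p->pid, sig);
            free(p);
        }
    }

    static void cleanup_children_on_exit(void)
    {
        cleanup_children(SIGTERM);
    }

    static void mark_child_for_cleanup(pid_t pid)
    {
        static int installed;
        struct child_to_clean *p = malloc(sizeof(*p));

        p->pid = pid;
        p->next = children_to_clean;
        children_to_clean = p;
        if (!installed) {
            atexit(cleanup_children_on_exit);
            installed = 1;
        }
    }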
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/argv-array:
run_hook: use argv_array API
checkout: use argv_array API
bisect: use argv_array API
quote: provide sq_dequote_to_argv_array
refactor argv_array into generic code
quote.h: fix bogus comment
add sha1_array API docs
This was a pretty straightforward use, so it really doesn't
save that many lines. Still, perhaps it's a little bit more
readable.
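For reference, the converted call site looks roughly like this (hook name
and extra argument are illustrative):

    #include "git-compat-util.h"
    #include "argv-array.h"
    #include "run-command.h"

    static int run_one_hook_sketch(const char *hook_path)
    {
        struct argv_array args = ARGV_ARRAY_INIT;
        struct child_process hook;
        int ret;

        argv_array_push(&args, hook_path);
        argv_array_push(&args, "example-argument");   /* illustrative */

        memset(&hook, 0, sizeof(hook));
        hook.argv = args.argv;
        hook.no_stdin = 1;
        hook.stdout_to_stderr = 1;
        ret = run_command(&hook);

        argv_array_clear(&args);
        return ret;
    }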
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the pager fails to run, git produces no output, e.g.:
$ GIT_PAGER=not-a-command git log
The error reporting fails for two reasons:
(1) start_command: There is a mechanism that detects errors during
execvp introduced in 2b541bf8 (start_command: detect execvp
failures early). The child writes one byte to a pipe only if
execvp fails. The parent waits for either EOF, when the
successful execvp automatically closes the pipe (see
FD_CLOEXEC in fcntl(2)), or it reads a single byte, in which
case it knows that the execvp failed. This mechanism is
incompatible with the workaround introduced in 35ce8622
(pager: Work around window resizing bug in 'less'), which
waits for input from the parent before the exec. Since both
the parent and the child are waiting for input from each
other, that would result in a deadlock. In order to avoid
that, the mechanism is disabled by closing the child_notifier
file descriptor.
(2) finish_command: The parent correctly detects the 127 exit
status from the child, but the error output goes nowhere,
since by that time it is already being redirected to the
child.
No simple solution for (1) comes to mind.
Number (2) can be solved by not sending error output to the pager.
Not redirecting error output to the pager can result in the pager
overwriting error output with standard output, however.
Since there is no reliable way to handle error reporting in the
parent, produce the output in the child instead.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new process's error output may be redirected elsewhere, but if
the exec fails, output should still go to the parent's stderr. This
has already been done for the die_routine. Do the same for
error_routine.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If start_command fails after forking and before exec finishes, there
is not much use in noticing an I/O error on top of that.
finish_command will notice that the child exited with nonzero status
anyway. So as noted in v1.7.0.3~20^2 (run-command.c: fix build
warnings on Ubuntu, 2010-01-30) and v1.7.5-rc0~29^2 (2011-03-16), it
is safe to ignore errors from write in this codepath.
Even so, the result from write contains useful information: it tells
us if the write was cancelled by a signal (EINTR) or was only
partially completed (e.g., when writing to an almost-full pipe).
Let's use write_in_full to loop until the desired number of bytes have
been written (still ignoring errors if that fails).
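For reference, the looping behaviour that write_in_full() provides over a
bare write(2) is roughly this (a sketch of the shape, not git's actual
helper):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t write_in_full_sketch(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t written = write(fd, p, count);
            if (written < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted by a signal: retry */
                return -1;
            }
            if (!written)
                return -1;         /* avoid spinning on a zero-byte write */
            p += written;
            count -= written;
            total += written;
        }
        return total;
    }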
As a happy side effect, the assignment to a dummy variable to appease
gcc -D_FORTIFY_SOURCE is no longer needed. xwrite and write_in_full
check the return value from write(2).
Noticed with gcc -Wunused-but-set-variable.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Current gcc + glibc with -D_FORTIFY_SOURCE try very aggressively to
protect against a programming style which uses write(...) without
checking the return value for errors. Even the usual hint of casting
to (void) does not suppress the warning.
Sometimes when there is an output error, especially right before exit,
there really is nothing to be done. The obvious solution, adopted in
v1.7.0.3~20^2 (run-command.c: fix build warnings on Ubuntu,
2010-01-30), is to save the return value to a dummy variable:
ssize_t dummy;
dummy = write(...);
But that (1) is ugly and (2) triggers -Wunused-but-set-variable
warnings with gcc-4.6 -Wall, so we are not much better off than when
we started.
Instead, use an "if" statement with an empty body to make the intent
clear.
if (write(...))
; /* yes, yes, there was an error. */
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The POSIX code path did The Right Thing already, but we have to do the same
on Windows.
This bug caused failures in t5526-fetch-submodules, where the output of
'git fetch --recurse-submodules' was in the wrong order.
Debugged-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/async-thread:
fast-import: die_nicely() back to vsnprintf (reverts part of ebaa79f)
Enable threaded async procedures whenever pthreads is available
Dying in an async procedure should only exit the thread, not the process.
Reimplement async procedures using pthreads
Windows: more pthreads functions
Fix signature of fcntl() compatibility dummy
Make report() from usage.c public as vreportf() and use it.
Modernize t5530-upload-pack-error.
Conflicts:
http-backend.c
Fix the problem where the cmd->err passed into start_command wasn't
being properly closed when certain types of errors occur. (Compare
the affected code with the clean shutdown code later in the function.)
On Windows, this problem would be triggered if mingw_spawnvpe()
failed, which would happen if the command to be executed was malformed
(e.g. a text file that didn't start with a #! line). If cmd->err was
a pipe, the failure to close it could result in a hang while the other
side was waiting (forever) for either input or pipe close, e.g. while
trying to shove the output into the side band. On msysGit, this
problem was causing a hang in t5516-fetch-push.
[J6t: With a slight adjustment of the test case, the hang is also
observed on Linux.]
Signed-off-by: bert Dvornik <dvornik+git@gmail.com>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A caller of start_command can set the member 'dir' to a directory to
request that the child process starts with that directory as CWD. The first
user of this feature was added recently in eee49b6 (Teach diff --submodule
and status to handle .git files in submodules).
On Windows, we have been lazy and had not implemented support for this
feature yet. This fixes the shortcoming.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Async procedures are intended as helpers that perform a very restricted
task, and the caller usually has to manage them in a larger context.
Conceptually, the async procedure is not concerned with the "bigger
picture" in whose context it is run. When it dies, it is not supposed
to destroy this "bigger picture", but rather only its own limited view
of the world. On POSIX, the async procedure is run in its own process,
and exiting this process naturally had only these limited effects.
On Windows (or when ASYNC_AS_THREAD is set), calling die() exited the
whole process, destroying the caller (the "big picture") as well.
This fixes it to exit only the thread.
Without ASYNC_AS_THREAD, one particular effect of exiting the async
procedure process is that it automatically closes file descriptors, most
notably the writable end of the pipe that the async procedure writes to.
The async API already requires that the async procedure closes the pipe
ends when it exits normally. But for calls to die() no requirements are
imposed. In the non-threaded case the pipe ends are closed implicitly
by the exiting process, but in the threaded case, the die routine must
take care of closing them.
Now t5530-upload-pack-error.sh passes on Windows.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On Windows, async procedures have always been run in threads, and the
implementation used Windows specific APIs. Rewrite the code to use pthreads.
A new configuration option is introduced so that the threaded implementation
can also be used on POSIX systems. Since this option is intended only as a
playground on POSIX, but is mandatory on Windows, the option is not
documented.
One detail is that on POSIX it is necessary to set FD_CLOEXEC on the pipe
handles. On Windows, this is not needed because pipe handles are not
inherited by child processes, and the new calls to set_cloexec() are
effectively no-ops.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Building git on Ubuntu 9.10 warns that the return value of write(2)
isn't checked. These warnings were introduced in commits:
2b541bf8 ("start_command: detect execvp failures early")
a5487ddf ("start_command: report child process setup errors to the
parent's stderr")
GCC details:
$ gcc --version
gcc (Ubuntu 4.4.1-4ubuntu9) 4.4.1
Silence the warnings by reading (but not making use of) the return value
of write(2).
Signed-off-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* sp/maint-push-sideband:
receive-pack: Send hook output over side band #2
receive-pack: Wrap status reports inside side-band-64k
receive-pack: Refactor how capabilities are shown to the client
send-pack: demultiplex a sideband stream with status data
run-command: support custom fd-set in async
run-command: Allow stderr to be a caller supplied pipe
Update git fsck --full short description to mention packs
Conflicts:
run-command.c
This patch adds the possibility to supply a set of non-0 file
descriptors for async process communication instead of the
default-created pipe.
Additionally, we now support bi-directional communication with the
async procedure, by giving the async function both read and write
file descriptors.
To retain compatibility and similar "API feel" with start_command,
we require start_async callers to set .out = -1 to get a readable
file descriptor. If either of .in or .out is 0, we supply no file
descriptor to the async process.
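A sketch of a caller after this change, assuming the updated callback
signature that receives both ends (the callback body and message are
illustrative):

    #include "git-compat-util.h"
    #include "run-command.h"

    static int produce(int in, int out, void *data)
    {
        const char *msg = data;

        if (write(out, msg, strlen(msg)) < 0)
            ; /* sketch: ignore write errors */
        close(out);
        return 0;
    }

    static int demo_async(void)
    {
        static char payload[] = "hello from the async side\n";
        struct async as;
        char buf[64];
        ssize_t n;

        memset(&as, 0, sizeof(as));
        as.proc = produce;
        as.data = payload;
        as.out = -1;      /* please create a pipe; readable end comes back in as.out */

        if (start_async(&as))
            return -1;
        n = read(as.out, buf, sizeof(buf));
        close(as.out);    /* the caller owns and closes this end */
        return finish_async(&as) || n < 0;
    }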
[sp: Note: Erik started this patch, and a huge bulk of it is
his work. All bugs were introduced later by Shawn.]
Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Like .out, .err may now be set to a file descriptor > 0, which
is a writable pipe/socket/file that the child's stderr will be
redirected into.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/exec-error-report:
Improve error message when a transport helper was not found
start_command: detect execvp failures early
run-command: move wait_or_whine earlier
start_command: report child process setup errors to the parent's stderr
Conflicts:
Makefile
* js/windows:
Do not use date.c:tm_to_time_t() from compat/mingw.c
MSVC: Windows-native implementation for subset of Pthreads API
MSVC: Fix an "incompatible pointer types" compiler warning
Windows: avoid the "dup dance" when spawning a child process
Windows: simplify the pipe(2) implementation
Windows: boost startup by avoiding a static dependency on shell32.dll
Windows: disable Python
When stdin, stdout, or stderr must be redirected for a child process that
on Windows is spawned using one of the spawn() functions of Microsoft's
C runtime, then there is no choice other than to
1. make a backup copy of fd 0,1,2 with dup
2. dup2 the redirection source fd into 0,1,2
3. spawn
4. dup2 the backup back into 0,1,2
5. close the backup copy and the redirection source
We used this idiom as well -- but we are not using the spawn() functions
anymore!
Instead, we have our own implementation. We had hardcoded that stdin,
stdout, and stderr of the child process were inherited from the parent's
fds 0, 1, and 2. But we can actually specify any fd.
With this patch, the fds to inherit are passed from start_command()'s
WIN32 section to our spawn implementation. This way, we can avoid the
backup copies of the fds.
The backup copies were a bug waiting to surface: The OS handles underlying
the dup()ed fds were inherited by the child process (but were not
associated with a file descriptor in the child). Consequently, the file or
pipe represented by the OS handle remained open until the child exited,
even after the backup copy had been closed in the parent process.
Since our implementation of pipe() creates non-inheritable OS handles, we
still dup() file descriptors in start_command() because dup() happens to
create inheritable duplicates. (A nice side effect is that the fd cleanup
in start_command is the same for Windows and Unix and remains unchanged.)
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, failures during execvp could be detected only by
finish_command. However, in some situations it is beneficial for the
parent process to know earlier that the child process will not run.
The idea to use a pipe to signal failures to the parent process and
the test case were lifted from patches by Ilari Liusvaara.
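The mechanism, reduced to a sketch (error checking and reaping of the
failed child are omitted):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pid_t spawn_with_exec_check(const char *file, char *const argv[])
    {
        int notify[2];
        pid_t pid;

        pipe(notify);
        fcntl(notify[1], F_SETFD, FD_CLOEXEC);   /* a successful exec closes it */

        pid = fork();
        if (!pid) {
            int err;
            close(notify[0]);
            execvp(file, argv);
            err = errno;                         /* only reached if execvp failed */
            if (write(notify[1], &err, sizeof(err)) < 0)
                ; /* nothing useful to do about it */
            _exit(127);
        }

        close(notify[1]);
        {
            int child_err;
            if (read(notify[0], &child_err, sizeof(child_err)) > 0) {
                close(notify[0]);
                errno = child_err;               /* parent learns of the failure early */
                return -1;
            }
        }
        close(notify[0]);
        return pid;                              /* exec succeeded; wait as usual */
    }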
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the child process's environment is set up in start_command(), error
messages were written to wherever the parent redirected the child's stderr
channel. However, even if the parent redirected the child's stderr, errors
during this setup process, including the exec itself, are usually an
indication of a problem in the parent's environment. Therefore, the error
messages should go to the parent's stderr.
Redirection of the child's error messages is usually only used to redirect
hook error messages during client-server exchanges. In these cases, hook
setup errors could be regarded as an information leak.
This patch makes a copy of stderr if necessary and uses a special
die routine that is used for all die() calls in the child that sends the
error messages to the parent's stderr.
The trace call that reported a failed execvp is removed (because it writes
to stderr) and replaced by die_errno() with special treatment of ENOENT.
The improvement in the error message can be seen with this sequence:
mkdir .git/hooks/pre-commit
git commit
Previously, the error message was
error: cannot run .git/hooks/pre-commit: No such file or directory
and now it is
fatal: cannot exec '.git/hooks/pre-commit': Permission denied
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If there are no metacharacters in the program to be run, we
can just skip running the shell entirely and directly exec
the program.
The metacharacter test is pulled verbatim from
launch_editor, which already implements this optimization.
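The test is of this general shape (the exact metacharacter list in the
real code may differ; this one is illustrative):

    #include <string.h>

    static int needs_shell(const char *cmd)
    {
        /* illustrative set of characters that force going through the shell */
        return cmd[strcspn(cmd, "|&;<>()$` \t\n'\"\\*?")] != '\0';
    }

If needs_shell() says no, the command can be handed straight to execvp()
instead of being wrapped in "sh -c".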
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many callsites run "sh -c $CMD" to run $CMD. We can make it
a little simpler for them by factoring out the munging of
argv.
For simple cases with no arguments, this doesn't help much, but:
1. For cases with arguments, we save the caller from
having to build the appropriate shell snippet.
2. We can later optimize to avoid the shell when
there are no metacharacters in the program.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code which is conditional on MinGW32 is actually conditional on Windows.
Use the WIN32 symbol, which is defined by the MINGW32 and MSVC environments,
but not by Cygwin.
Define SNPRINTF_SIZE_CORR=1 for MSVC too, as its vsnprintf function does
not add NUL at the end of the buffer if the result fits the buffer size
exactly.
Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
MSVC requires __stdcall to be between the function's return value and the
function name, and that the function pointer type is in the form of
return_type (WINAPI *function_name)(arguments...)
Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
MSVC does not understand this C99 style.
Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, it would not be possible to call start_command twice for the
same struct child_process that has env set.
The fix is achieved by moving the loop that modifies the environment block
into a helper function. This also allows us to make two other helper
functions static.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/run-command-updates:
api-run-command.txt: describe error behavior of run_command functions
run-command.c: squelch a "use before assignment" warning
receive-pack: remove unnecessary run_status report
run_command: report failure to execute the program, but optionally don't
run_command: encode deadly signal number in the return value
run_command: report system call errors instead of returning error codes
run_command: return exit code as positive value
MinGW: simplify waitpid() emulation macros
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490) compiler
(and probably others) mistakenly thinks the variable failed_errno is used
before being assigned. Work around this by giving it a fake initialization.
Signed-off-by: David Soria Parra <dsp@php.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the case where a program was not found, it was still the task of the
caller to report an error to the user. Usually, this is an interesting case
but only a few callers actually reported a specific error (though many call
sites report a generic error message regardless of the cause).
With this change the error is reported by run_command, but since there is
one call site in git.c that does not want that, an option is added to
struct child_process, which is used to turn the error off.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We now write the signal number in the error message if the program
terminated by a signal. The negative return value is constructed such that
after truncation to 8 bits it looks like a POSIX shell's $?:
$ echo 0000 | { git upload-pack .; echo $? >&2; } | :
error: git-upload-pack died of signal 13
141
Previously, the exit code was 255 instead of 141.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The motivation for this change is that system call failures are serious
errors that should be reported to the user, but only few callers took the
burden to decode the error codes that the functions returned into error
messages.
If at all, then only an unspecific error message was given. A prominent
example is this:
$ git upload-pack . | :
fatal: unable to run 'git-upload-pack'
In this example, git-upload-pack, the external command invoked through the
git wrapper, dies due to SIGPIPE, but the git wrapper does not bother to
report the real cause. In fact, this very error message is copied to the
syslog if git-daemon's client aborts the connection early.
With this change, system call failures are reported immediately after the
failure and only a generic failure code is returned to the caller. In the
above example the error is now to the point:
$ git upload-pack . | :
error: git-upload-pack died of signal
Note that there is no error report if the invoked program terminated with
a non-zero exit code, because it is reasonable to expect that the invoked
program has already reported an error. (But many run_command call sites
nevertheless write a generic error message.)
There was one special return code that was used to identify the case where
run_command failed because the requested program could not be exec'd. This
special case is now treated like a system call failure with errno set to
ENOENT. No error is reported in this case, because the call site in git.c
expects this as a normal result. Therefore, the callers that carefully
decoded the return value still check for this condition.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As a general guideline, functions in git's code return zero to indicate
success and negative values to indicate failure. The run_command family of
functions followed this guideline. But there are actually two different
kinds of failure:
- failures of system calls;
- non-zero exit code of the program that was run.
Usually, a non-zero exit code of the program is a failure and means a
failure for the caller as well. Except that sometimes it does not. For
example, the exit code of merge programs (e.g. external merge drivers)
conveys information about how the merge failed, and not all non-zero exit
codes are actually failures.
Furthermore, the return value of run_command is sometimes used as exit
code by the caller.
This change arranges that the exit code of the program is returned as a
positive value, which can now be regarded as the "result" of the function.
System call failures continue to be reported as negative values.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change calls to die(..., strerror(errno)) to use the new die_errno().
In the process, also make slight style adjustments: at least state
_something_ about the function that failed (instead of just printing
the pathname), and put paths in single quotes.
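The shape of the conversion at a typical call site (path and message are
illustrative):

    #include "git-compat-util.h"   /* declares die() and die_errno() */

    static void chdir_or_die(const char *dir)
    {
        /* before: die("cannot change to %s: %s", dir, strerror(errno)); */
        if (chdir(dir))
            die_errno("cannot change to '%s'", dir);
    }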
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Essentially; s/type* /type */ as per the coding guidelines.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When run_command() was asked to run a non-existent command, its behavior
varied depending on the platform:
- on POSIX systems, we would fork, and then after the execvp call
failed, we would call die(), which prints a message to stderr and
exits with code 128.
- on Windows, we do a PATH lookup, realize the program isn't there, and
then return ERR_RUN_COMMAND_FORK
The goal of this patch is to make it clear to callers that the specific
error was a missing command. To do this, we will return the error code
ERR_RUN_COMMAND_EXEC, which is already defined in run-command.h, checked
for in several places, but never actually gets set.
The new behavior is:
- on POSIX systems, we exit the forked process with code 127 (the same
as the shell uses to report missing commands). The parent process
recognizes this code and returns an EXEC error. The stderr message is
silenced, since the caller may be speculatively trying to run a
command. Instead, we use trace_printf so that somebody interested in
debugging can see the error that occurred.
- on Windows, we check errno, which is already set correctly by
mingw_spawnvpe, and report an EXEC error instead of a FORK error
Thus it is safe to speculatively run a command:
int r = run_command_v_opt(argv, 0);
if (r == -ERR_RUN_COMMAND_EXEC)
/* oops, it wasn't found; try something else */
else
/* we failed for some other reason, error is in r */
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A function that runs a hook is used in several Git commands.
builtin-commit.c has the one that is most general for cases without
piping. The one in builtin-gc.c prints some useful warnings.
This patch moves a merged version of these variants into libgit and
lets the other builtins use this libified run_hook().
The run_hook() function used in receive-pack.c feeds the standard
input of the pre-receive or post-receive hooks. This function is
renamed to run_receive_hook() because the libified run_hook() cannot
handle this.
Mentored-by: Daniel Barkalow <barkalow@iabervon.org>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function is not used anywhere.
Johannes Sixt <johannes.sixt@telecom.at>:
> Future callers can use run_command_v_opt_cd_env() instead.
Signed-off-by: Nanako Shiraishi <nanako3@lavabit.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This adds fflush(NULL) before fork() in start_command(), to keep
the generic interface safe.
A remaining use of fork() with no flushing is in a comment in
show_tree(). Rewrite that comment to use start_command().
Signed-off-by: Anders Melchiorsen <mail@cup.kalibalik.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This prevents double output in case stdout is redirected.
Signed-off-by: Anders Melchiorsen <mail@cup.kalibalik.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We prefer running the dashless form, and POSIX side already does so; we
should use it in MinGW's start_command(), too.
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Acked-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a function provided by the caller which is called
_after_ the process is forked, but before the spawned
program is executed. On platforms (like mingw) where
subprocesses are forked and executed in a single call, the
preexec callback is simply ignored.
This will be used in the following patch to do some setup
for 'less' that must happen in the forked child.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When GIT_TRACE is set, the user is most likely wanting to see an external
command that is about to be executed.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The problem with Windows's own implementation is that it tries to be
clever when a console program is invoked from a GUI application: In this
case it sometimes automatically allocates a new console window. As a
consequence, the IO channels of the spawned program are directed to the
console, but the invoking application listens on channels that are now
directed to nowhere.
In this implementation we use the lowlevel facilities of CreateProcess(),
which offers a flag to tell the system not to open a console. As a side
effect, only stdin, stdout, and stderr channels will be accessible from
C programs that are spawned. Other channels (file handles, pipe handles,
etc.) are still inherited by the spawned program, but it doesn't get
enough information to access them.
Johannes Schindelin integrated path quoting and unified the various
*execv* and *spawnv* helpers. Eric Raible suggested to also quote '{'.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
In upload-pack we must explicitly close the output channel of rev-list.
(On Unix, the channel is closed automatically because the process that runs
rev-list terminates.)
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
On Windows, we have spawnv() variants to run a child process instead of
fork()/exec(). In order to attach pipe ends to stdin, stdout, and stderr,
we have to use this idiom:
save1 = dup(1);
dup2(pipe[1], 1);
spawnv();
dup2(save1, 1);
close(pipe[1]);
assuming that the descriptors created by pipe() are not inheritable.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
With this patch, in the 'start_command' function after forking
we now take care of stderr in the child process before stdout.
This way if 'start_command' is called with a 'child_process'
argument like this:
.err = -1;
.stdout_to_stderr = 1;
then stderr will be redirected to a pipe before stdout is
redirected to stderr. So we can now get the process' stdout
from the pipe (as well as its stderr).
Earlier such a call would have redirected stdout to stderr
before stderr was itself redirected, and therefore stdout
would not have followed stderr, which would not have been
very useful anyway.
Update documentation in 'api-run-command.txt' accordingly.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Acked-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Callers of start_command() can set the members .in and .out of struct
child_process to a value > 0 to specify that this descriptor is used as
the stdin or stdout of the child process.
Previously, if start_command() was successful, this descriptor was closed
upon return. Here we now make sure that the descriptor is also closed in
case of failures. All callers are updated not to close the file descriptor
themselves after start_command() was called.
Note that earlier run_gpg_verify() of git-verify-tag set .out = 1, which
worked because start_command() treated this as a special case, but now
this is incorrect because it closes the descriptor. The intent here is to
inherit stdout to the child, which is achieved by .out = 0.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By setting .in, .out, or .err members of struct child_process to -1, the
callers of start_command() can request that a pipe is allocated that talks
to the child process and one end is returned by replacing -1 with the
file descriptor.
Previously, a flag was set (for .in and .out, but not .err) to signal
finish_command() to close the pipe end that start_command() had handed out,
so it was optional for callers to close the pipe, and many already do so.
Now we make it mandatory to close the pipe.
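Under the new rules a typical caller looks roughly like this (command and
buffer handling are illustrative):

    #include "git-compat-util.h"
    #include "run-command.h"

    static int read_from_child_sketch(void)
    {
        struct child_process cmd;
        const char *argv[] = { "git", "rev-parse", "--git-dir", NULL };
        char buf[256];
        ssize_t n;

        memset(&cmd, 0, sizeof(cmd));
        cmd.argv = argv;
        cmd.out = -1;              /* please allocate a pipe I can read from */

        if (start_command(&cmd))
            return -1;
        n = read(cmd.out, buf, sizeof(buf) - 1);
        close(cmd.out);            /* now mandatory: the caller closes the pipe */
        if (finish_command(&cmd) || n < 0)
            return -1;
        buf[n] = '\0';
        return 0;
    }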
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some callers may wish to redirect stderr to /dev/null in some
contexts, such as if they are executing a command only to get
the exit status and don't want users to see whatever output it
may produce as a side-effect of computing that exit status.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This adds start_async() and finish_async(), which runs a function
asynchronously. Communication with the caller happens only via pipes.
For this reason, this implementation forks off a child process that runs
the function.
[sp: Style nit fixed by removing unnecessary block on if condition
inside of start_async()]
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This adds another stanza that allocates a pipe that is connected to the
child's stderr and that the caller can read from. In order to request this
pipe, the caller sets cmd->err to -1.
The implementation is not exactly modeled after the stdout case: For stdout
the caller can supply an existing file descriptor, but this facility is
not needed in the stderr case. Additionally, the caller is required to
close cmd->err.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
To unset a variable, just specify its name, without "=". For example:
const char *env[] = {"GIT_DIR=.git", "PWD", NULL};
const char *argv[] = {"git-ls-files", "-s", NULL};
int err = run_command_v_opt_cd_env(argv, RUN_GIT_CMD, ".", env);
The PWD will be unset before executing git-ls-files.
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There is no way to specify and override for the environment:
there'd be no user for it yet.
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It can make code simpler (no need to preserve cwd) and safer
(no chance the cwd of the current process is accidentally forgotten).
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Some run-command callers may wish to just discard any data that
is sent to stdout from the child. This is a lot like our existing
no_stdin support: we just open /dev/null and duplicate the descriptor
into position.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Some potential callers of the run_command family of functions need
to control not only the stdin redirection of the child, but also
the stdout redirection of the child. This can now be setup much
like the already existing stdin redirection.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
I started hacking on a change to add stdout redirection support to
the run_command family, but found I was using a lot of close calls
on two pipes in an array (such as for pipe). So I'm doing a tiny
bit of refactoring first to make the next set of changes clearer.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Sometimes callers trying to use run_command to execute a child
process will want to setup a pipe or file descriptor to redirect
into the child's stdin.
This idea is completely stolen from builtin-bundle's fork_with_pipe,
written by Johannes Schindelin. All credit (and blame) should lie
with Dscho. ;-)
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If the calling process wants to send data to stdin of a
child process it will need to arrange for a pipe and get
the child process running, feed data to it, then wait
for the child process to finish. So we split the run
function into two halves, allowing callers to first
start the child then later finish it.
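A caller of the split API ends up looking roughly like this (a sketch;
the command, payload, and the .in = -1 pipe request are illustrative of
the surrounding commits in this series):

    #include "cache.h"             /* for write_in_full() */
    #include "run-command.h"

    static int feed_child_sketch(const char *data, size_t len)
    {
        struct child_process cmd;
        const char *argv[] = { "git", "hash-object", "-w", "--stdin", NULL };

        memset(&cmd, 0, sizeof(cmd));
        cmd.argv = argv;
        cmd.in = -1;               /* ask for a pipe connected to the child's stdin */

        if (start_command(&cmd))
            return -1;
        if (write_in_full(cmd.in, data, len) < 0) {
            close(cmd.in);
            finish_command(&cmd);
            return -1;
        }
        close(cmd.in);             /* EOF for the child */
        return finish_command(&cmd);
    }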
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There are a number of places where we do some variation of
fork()+exec() but we also need to setup redirection in the process,
much like what run_command does for us already with its option flags.
It would be nice to reuse more of the run_command logic, especially
as that non-fork API helps us to port to odd platforms like Win32.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We don't actually use these va_list based variants of run_command
anymore. I'm removing them before I make further improvements.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently the update hook invoked by receive-pack has its stdin
connected to the pushing client. The hook shouldn't attempt to
read from this stream, and doing so may consume data that was
meant for receive-pack. Instead we should give the update hook
/dev/null as its stdin, ensuring that it always receives EOF and
doesn't disrupt the protocol if it attempts to read any data.
The post-update hook is similar, as it gets invoked with /dev/null
on stdin to prevent the hook from reading data from the client.
Previously we had invoked it with stdout also connected to /dev/null,
throwing away anything on stdout, to prevent client protocol errors.
Instead we should redirect stdout to stderr, like we do with the
update hook.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If an update hook outputs to stdout then that output will be sent
back over the wire to the push client as though it were part of
the git protocol. This tends to cause protocol errors on the
client end of the connection, as the hook output is not expected
in that context. Most hook developers work around this by making
sure their hook outputs everything to stderr.
But hooks shouldn't need to perform such special behavior. Instead
we can just dup stderr to stdout prior to invoking the update hook.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The argc parameter is never used by the run_command_v family of
functions. Instead they require that the passed argv[] be NULL
terminated so they can rely on the operating system's execvp
function to correctly pass the arguments to the new process.
Making the caller pass the argc is just confusing, as the caller
could be misled into believing that the argc might take precedence
over the argv, or that the argv does not need to be NULL terminated.
So goodbye argc. Don't come back.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a mechanical clean-up of the way *.c files include
system header files.
(1) sources under compat/, platform sha-1 implementations, and
xdelta code are exempt from the following rules;
(2) the first #include must be "git-compat-util.h" or one of
our own header file that includes it first (e.g. config.h,
builtin.h, pkt-line.h);
(3) system headers that are included in "git-compat-util.h"
need not be included in individual C source files.
(4) "git-compat-util.h" does not have to include subsystem
specific header files (e.g. expat.h).
Signed-off-by: Junio C Hamano <junkio@cox.net>
Mark Wooding noticed there was a type mismatch warning in git.c; this
patch does things slightly differently (mostly tightening const) and
was what I was holding onto, waiting for the setup-revisions change
to be merged into the master branch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The git suite may not be in PATH (and thus programs such as
git-send-pack could not exec git-rev-list). Thus there is a need for
logic that will locate these programs. Modifying PATH is not
desirable as it result in behavior differing from the user's
intentions, as we may end up prepending "/usr/bin" to PATH.
- git C programs will use exec*_git_cmd() APIs to exec sub-commands.
- exec*_git_cmd() will execute a git program by searching for it in
the following directories:
1. --exec-path (as used by "git")
2. The GIT_EXEC_PATH environment variable.
3. $(gitexecdir) as set in Makefile (default value $(bindir)).
- git wrapper will modify PATH as before to enable shell scripts to
invoke "git-foo" commands.
Ideally, shell scripts should use the git wrapper to become independent
of PATH, and then modifying PATH will not be necessary.
[jc: with minor updates after a brief review.]
Signed-off-by: Michal Ostrowski <mostrows@watson.ibm.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When calling post-update hook, don't leave stdin and stdout connected to
the pushing connection.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The earlier one conflated update and post-update hooks for no
good reason. Correct that ugly hack. Now post-update hooks
will take the list of successfully updated refs.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Just before updating a ref,
$GIT_DIR/hooks/update refname old-sha1 new-sha1
is called if executable. The hook can decline the ref to be
updated by exiting with a non-zero status, or allow it to be
updated by exiting with a zero status. The mechanism also
allows e.g. sending a mail with the pushed commits on the remote
repository.
Documentation update with an example hook is included.
jc: The credits of the basic idea and initial implementation go
to Josef, but I ended up rewriting major parts of his patch, so
bugs are all mine. Also I changed the semantics for the hook
from his original version (which were post-update hook) so that
the hook can optionally decline to update the ref, and also can
be used to implement the overall cleanups. The latter was
primarily to implement a suggestion from Linus that calling
update-server-info should be made optional.
Signed-off-by: Junio C Hamano <junkio@cox.net>