#!/usr/bin/env perl
# Copyright (C) 2006, Eric Wong <normalperson@yhbt.net>
# License: GPL v2 or later
use warnings;
use strict;
use vars qw/	$AUTHOR $VERSION
		$SVN_URL $SVN_INFO $SVN_WC $SVN_UUID
		$GIT_SVN_INDEX $GIT_SVN
		$GIT_DIR $GIT_SVN_DIR $REVDB/;
$AUTHOR = 'Eric Wong <normalperson@yhbt.net>';
$VERSION = '@@GIT_VERSION@@';

use Cwd qw/abs_path/;
$GIT_DIR = abs_path($ENV{GIT_DIR} || '.git');
$ENV{GIT_DIR} = $GIT_DIR;

my $LC_ALL = $ENV{LC_ALL};
my $TZ = $ENV{TZ};
# make sure the svn binary gives consistent output between locales and TZs:
$ENV{TZ} = 'UTC';
$ENV{LC_ALL} = 'C';

$| = 1; # unbuffer STDOUT

# If SVN:: library support is added, please make the dependencies
# optional and preserve the capability to use the command-line client.
# use eval { require SVN::... } to make it lazy load
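# (Illustrative sketch only, not part of this script: such a lazy check
#  could look like `my $have_svn = eval { require SVN::Core; 1 };`,
#  leaving $have_svn false when the libraries are missing.)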
# We don't use any modules not in the standard Perl distribution:
use Carp qw/croak/;
use IO::File qw//;
use File::Basename qw/dirname basename/;
use File::Path qw/mkpath/;
use Getopt::Long qw/:config gnu_getopt no_ignore_case auto_abbrev pass_through/;
use File::Spec qw//;
use File::Copy qw/copy/;
use POSIX qw/strftime/;
use IPC::Open3;
use Memoize;
memoize('revisions_eq');
memoize('cmt_metadata');
memoize('get_commit_time');
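# (Note: Memoize caches each function's return value per argument list,
#  so repeated calls with identical arguments are answered from the
#  cache instead of being recomputed.)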

my ($SVN_PATH, $SVN, $SVN_LOG, $_use_lib, $AUTH_BATON, $AUTH_CALLBACKS);

sub nag_lib {
	print STDERR <<EOF;
! Please consider installing the SVN Perl libraries (version 1.1.0 or
! newer). You will generally get better performance and fewer bugs,
! especially if you:
! 1) have a case-insensitive filesystem
! 2) replace symlinks with files (and vice-versa) in commits

EOF
}

$_use_lib = 1 unless $ENV{GIT_SVN_NO_LIB};
libsvn_load();
nag_lib() unless $_use_lib;

my $_optimize_commits = $ENV{GIT_SVN_NO_OPTIMIZE_COMMITS} ? 0 : 1;
my $sha1 = qr/[a-f\d]{40}/;
my $sha1_short = qr/[a-f\d]{4,40}/;
my ($_revision,$_stdin,$_no_ignore_ext,$_no_stop_copy,$_help,$_rmdir,$_edit,
	$_find_copies_harder, $_l, $_cp_similarity, $_cp_remote,
	$_repack, $_repack_nr, $_repack_flags, $_q,
	$_message, $_file, $_follow_parent, $_no_metadata,
	$_template, $_shared, $_no_default_regex, $_no_graft_copy,
	$_limit, $_verbose, $_incremental, $_oneline, $_l_fmt, $_show_commit,
	$_version, $_upgrade, $_authors, $_branch_all_refs, @_opt_m,
	$_merge, $_strategy, $_dry_run, $_ignore_nodate, $_non_recursive,
	$_username, $_config_dir, $_no_auth_cache);
my (@_branch_from, %tree_map, %users, %rusers, %equiv);
my ($_svn_co_url_revs, $_svn_pg_peg_revs);
my @repo_path_split_cache;

my %fc_opts = ( 'no-ignore-externals' => \$_no_ignore_ext,
		'branch|b=s' => \@_branch_from,
		'follow-parent|follow' => \$_follow_parent,
		'branch-all-refs|B' => \$_branch_all_refs,
		'authors-file|A=s' => \$_authors,
		'repack:i' => \$_repack,
		'no-metadata' => \$_no_metadata,
		'quiet|q' => \$_q,
		'username=s' => \$_username,
		'config-dir=s' => \$_config_dir,
		'no-auth-cache' => \$_no_auth_cache,
		'ignore-nodate' => \$_ignore_nodate,
		'repack-flags|repack-args|repack-opts=s' => \$_repack_flags);

my ($_trunk, $_tags, $_branches);
my %multi_opts = ( 'trunk|T=s' => \$_trunk,
		'tags|t=s' => \$_tags,
		'branches|b=s' => \$_branches );
my %init_opts = ( 'template=s' => \$_template, 'shared' => \$_shared );
my %cmt_opts = ( 'edit|e' => \$_edit,
		'rmdir' => \$_rmdir,
		'find-copies-harder' => \$_find_copies_harder,
		'l=i' => \$_l,
		'copy-similarity|C=i'=> \$_cp_similarity
);

my %cmd = (
	fetch => [ \&fetch, "Download new revisions from SVN",
		{ 'revision|r=s' => \$_revision, %fc_opts } ],
	init => [ \&init, "Initialize a repo for tracking" .
		" (requires URL argument)",
		\%init_opts ],
	commit => [ \&commit, "Commit git revisions to SVN",
		{ 'stdin|' => \$_stdin, %cmt_opts, %fc_opts, } ],
	'show-ignore' => [ \&show_ignore, "Show svn:ignore listings",
		{ 'revision|r=i' => \$_revision } ],
	rebuild => [ \&rebuild, "Rebuild git-svn metadata (after git clone)",
		{ 'no-ignore-externals' => \$_no_ignore_ext,
		  'copy-remote|remote=s' => \$_cp_remote,
		  'upgrade' => \$_upgrade } ],
	'graft-branches' => [ \&graft_branches,
		'Detect merges/branches from already imported history',
		{ 'merge-rx|m' => \@_opt_m,
		  'branch|b=s' => \@_branch_from,
		  'branch-all-refs|B' => \$_branch_all_refs,
		  'no-default-regex' => \$_no_default_regex,
		  'no-graft-copy' => \$_no_graft_copy } ],
	'multi-init' => [ \&multi_init,
		'Initialize multiple trees (like git-svnimport)',
		{ %multi_opts, %fc_opts } ],
	'multi-fetch' => [ \&multi_fetch,
		'Fetch multiple trees (like git-svnimport)',
		\%fc_opts ],
	'log' => [ \&show_log, 'Show commit logs',
		{ 'limit=i' => \$_limit,
		  'revision|r=s' => \$_revision,
		  'verbose|v' => \$_verbose,
		  'incremental' => \$_incremental,
		  'oneline' => \$_oneline,
		  'show-commit' => \$_show_commit,
		  'non-recursive' => \$_non_recursive,
		  'authors-file|A=s' => \$_authors,
		} ],
	'commit-diff' => [ \&commit_diff, 'Commit a diff between two trees',
		{ 'message|m=s' => \$_message,
		  'file|F=s' => \$_file,
		  'revision|r=s' => \$_revision,
		  %cmt_opts } ],
	dcommit => [ \&dcommit, 'Commit several diffs to merge with upstream',
		{ 'merge|m|M' => \$_merge,
		  'strategy|s=s' => \$_strategy,
		  'dry-run|n' => \$_dry_run,
		  %cmt_opts } ],
);

my $cmd;
for (my $i = 0; $i < @ARGV; $i++) {
	if (defined $cmd{$ARGV[$i]}) {
		$cmd = $ARGV[$i];
		splice @ARGV, $i, 1;
		last;
	}
};
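# (Example: for "git-svn fetch -r 100", the loop above sets $cmd to
#  'fetch' and leaves @ARGV as ('-r', '100') for option parsing below.)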
my %opts;
%opts = %{$cmd{$cmd}->[2]} if (defined $cmd);

read_repo_config(\%opts);
my $rv = GetOptions(%opts, 'help|H|h' => \$_help,
		'version|V' => \$_version,
		'id|i=s' => \$GIT_SVN);
exit 1 if (!$rv && $cmd ne 'log');

set_default_vals();
usage(0) if $_help;
version() if $_version;
usage(1) unless defined $cmd;
init_vars();
load_authors() if $_authors;
load_all_refs() if $_branch_all_refs;
svn_compat_check() unless $_use_lib;
migration_check() unless $cmd =~ /^(?:init|rebuild|multi-init|commit-diff)$/;
$cmd{$cmd}->[0]->(@ARGV);
exit 0;

####################### primary functions ######################
sub usage {
	my $exit = shift || 0;
	my $fd = $exit ? \*STDERR : \*STDOUT;
	print $fd <<"";
git-svn - bidirectional operations between a single Subversion tree and git
Usage: $0 <command> [options] [arguments]\n

	print $fd "Available commands:\n" unless $cmd;

	foreach (sort keys %cmd) {
		next if $cmd && $cmd ne $_;
		print $fd ' ',pack('A17',$_),$cmd{$_}->[1],"\n";
		foreach (keys %{$cmd{$_}->[2]}) {
			# prints out arguments as they should be passed:
			my $x = s#[:=]s$## ? '<arg>' : s#[:=]i$## ? '<num>' : '';
			print $fd ' ' x 21, join(', ', map { length $_ > 1 ?
							"--$_" : "-$_" }
						split /\|/,$_)," $x\n";
		}
	}
	print $fd <<"";
\nGIT_SVN_ID may be set in the environment or via the --id/-i switch to an
arbitrary identifier if you're tracking multiple SVN branches/repositories in
one git repository and want to keep them separate. See git-svn(1) for more
information.

	exit $exit;
}

sub version {
	print "git-svn version $VERSION\n";
	exit 0;
}

sub rebuild {
	if (quiet_run(qw/git-rev-parse --verify/,"refs/remotes/$GIT_SVN^0")) {
		copy_remote_ref();
	}
	$SVN_URL = shift; # may be undef
	my $newest_rev = 0;
	if ($_upgrade) {
		sys('git-update-ref',"refs/remotes/$GIT_SVN","$GIT_SVN-HEAD");
	} else {
		check_upgrade_needed();
	}

	my $pid = open(my $rev_list,'-|');
	defined $pid or croak $!;
	if ($pid == 0) {
		exec("git-rev-list","refs/remotes/$GIT_SVN") or croak $!;
	}
	my $latest;
	while (<$rev_list>) {
		chomp;
		my $c = $_;
		croak "Non-SHA1: $c\n" unless $c =~ /^$sha1$/o;
		my @commit = grep(/^git-svn-id: /,`git-cat-file commit $c`);
		next if (!@commit); # skip merges
		my ($url, $rev, $uuid) = extract_metadata($commit[$#commit]);
		if (!defined $rev || !$uuid) {
			croak "Unable to extract revision or UUID from ",
				"$c, $commit[$#commit]\n";
		}

		# if we merged or otherwise started elsewhere, this is
		# how we break out of it
		next if (defined $SVN_UUID && ($uuid ne $SVN_UUID));
		next if (defined $SVN_URL && defined $url && ($url ne $SVN_URL));

		unless (defined $latest) {
			if (!$SVN_URL && !$url) {
				croak "SVN repository location required\n";
			}
			$SVN_URL ||= $url;
			$SVN_UUID ||= $uuid;
			setup_git_svn();
			$latest = $rev;
		}
		revdb_set($REVDB, $rev, $c);
		print "r$rev = $c\n";
		$newest_rev = $rev if ($rev > $newest_rev);
	}
	close $rev_list or croak $?;

	goto out if $_use_lib;
	if (!chdir $SVN_WC) {
		svn_cmd_checkout($SVN_URL, $latest, $SVN_WC);
		chdir $SVN_WC or croak $!;
	}

	$pid = fork;
	defined $pid or croak $!;
	if ($pid == 0) {
		my @svn_up = qw(svn up);
		push @svn_up, '--ignore-externals' unless $_no_ignore_ext;
		sys(@svn_up,"-r$newest_rev");
		$ENV{GIT_INDEX_FILE} = $GIT_SVN_INDEX;
		index_changes();
		exec('git-write-tree') or croak $!;
	}
	waitpid $pid, 0;
	croak $? if $?;
out:
	if ($_upgrade) {
		print STDERR <<"";
Keeping deprecated refs/head/$GIT_SVN-HEAD for now. Please remove it
when you have upgraded your tools and habits to use refs/remotes/$GIT_SVN

	}
}

sub init {
	my $url = shift or die "SVN repository location required " .
				"as a command-line argument\n";
	$url =~ s!/+$!!; # strip trailing slash

	if (my $repo_path = shift) {
		unless (-d $repo_path) {
			mkpath([$repo_path]);
		}
		$GIT_DIR = $ENV{GIT_DIR} = $repo_path . "/.git";
		init_vars();
	}

	$SVN_URL = $url;
	unless (-d $GIT_DIR) {
		my @init_db = ('git-init-db');
		push @init_db, "--template=$_template" if defined $_template;
		push @init_db, "--shared" if defined $_shared;
		sys(@init_db);
	}
	setup_git_svn();
}

sub fetch {
|
2006-03-02 13:58:31 +08:00
|
|
|
check_upgrade_needed();
|
2006-05-24 16:22:07 +08:00
|
|
|
$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
|
2006-06-13 06:23:48 +08:00
|
|
|
my $ret = $_use_lib ? fetch_lib(@_) : fetch_cmd(@_);
|
|
|
|
if ($ret->{commit} && quiet_run(qw(git-rev-parse --verify
|
|
|
|
refs/heads/master^0))) {
|
|
|
|
sys(qw(git-update-ref refs/heads/master),$ret->{commit});
|
|
|
|
}
|
|
|
|
return $ret;
|
|
|
|
}
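The guard in fetch() above creates refs/heads/master only when the ref does not yet exist: quiet_run() returns the child's exit status, so a non-zero result from `git-rev-parse --verify` marks an unborn branch. A minimal sketch of that quiet_run() behavior, assuming the same fork/exec/discard-output shape (the real helper lives elsewhere in this file):

```perl
#!/usr/bin/env perl
# Sketch: run a command with its output discarded and return the raw
# wait status, as the quiet_run() call in fetch() above relies on.
use strict;
use warnings;

sub quiet_run {
	defined(my $pid = fork) or die "fork: $!";
	if (!$pid) {
		open STDOUT, '>', '/dev/null' or die $!;
		open STDERR, '>', '/dev/null' or die $!;
		exec @_ or exit 1;
	}
	waitpid $pid, 0;
	return $?;
}

# `false` exits non-zero, like rev-parse --verify on a missing ref:
print "would create master\n" if quiet_run('false');
```

In fetch(), a non-zero status from the verify means the ref is safe to create with git-update-ref.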
|
|
|
|
|
|
|
|
sub fetch_cmd {
|
|
|
|
my (@parents) = @_;
|
2006-02-16 17:24:16 +08:00
|
|
|
my @log_args = -d $SVN_WC ? ($SVN_WC) : ($SVN_URL);
|
2006-02-17 10:13:32 +08:00
|
|
|
unless ($_revision) {
|
|
|
|
$_revision = -d $SVN_WC ? 'BASE:HEAD' : '0:HEAD';
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
2006-02-17 10:13:32 +08:00
|
|
|
push @log_args, "-r$_revision";
|
2006-02-16 17:24:16 +08:00
|
|
|
push @log_args, '--stop-on-copy' unless $_no_stop_copy;
|
|
|
|
|
2006-02-21 02:57:28 +08:00
|
|
|
my $svn_log = svn_log_raw(@log_args);
|
2006-02-16 17:24:16 +08:00
|
|
|
|
2006-03-26 10:52:31 +08:00
|
|
|
my $base = next_log_entry($svn_log) or croak "No base revision!\n";
|
2006-06-13 19:02:23 +08:00
|
|
|
# don't need last_revision from grab_base_rev() because
|
|
|
|
# user could've specified a different revision to skip (they
|
|
|
|
# didn't want to import certain revisions into git for whatever
|
|
|
|
# reason), so trust $base->{revision} instead.
|
|
|
|
my (undef, $last_commit) = svn_grab_base_rev();
|
2006-02-16 17:24:16 +08:00
|
|
|
unless (-d $SVN_WC) {
|
2006-03-09 19:48:47 +08:00
|
|
|
svn_cmd_checkout($SVN_URL,$base->{revision},$SVN_WC);
|
2006-02-16 17:24:16 +08:00
|
|
|
chdir $SVN_WC or croak $!;
|
2006-03-09 19:48:47 +08:00
|
|
|
read_uuid();
|
2006-02-16 17:24:16 +08:00
|
|
|
$last_commit = git_commit($base, @parents);
|
2006-05-24 10:23:41 +08:00
|
|
|
assert_tree($last_commit);
|
2006-02-16 17:24:16 +08:00
|
|
|
} else {
|
|
|
|
chdir $SVN_WC or croak $!;
|
2006-03-09 19:48:47 +08:00
|
|
|
read_uuid();
|
2006-05-04 13:54:00 +08:00
|
|
|
# looks like a user manually cp'd and svn switch'ed
|
|
|
|
unless ($last_commit) {
|
|
|
|
sys(qw/svn revert -R ./);
|
|
|
|
assert_svn_wc_clean($base->{revision});
|
|
|
|
$last_commit = git_commit($base, @parents);
|
|
|
|
assert_tree($last_commit);
|
|
|
|
}
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
|
|
|
my @svn_up = qw(svn up);
|
|
|
|
push @svn_up, '--ignore-externals' unless $_no_ignore_ext;
|
2006-03-26 10:52:31 +08:00
|
|
|
my $last = $base;
|
|
|
|
while (my $log_msg = next_log_entry($svn_log)) {
|
|
|
|
if ($last->{revision} >= $log_msg->{revision}) {
|
|
|
|
croak "Out of order: last >= current: ",
|
|
|
|
"$last->{revision} >= $log_msg->{revision}\n";
|
|
|
|
}
|
2006-05-24 10:23:41 +08:00
|
|
|
# Revert is needed for cases like:
|
|
|
|
# https://svn.musicpd.org/Jamming/trunk (r166:167), but
|
|
|
|
# I can't seem to reproduce something like that on a test...
|
|
|
|
sys(qw/svn revert -R ./);
|
|
|
|
assert_svn_wc_clean($last->{revision});
|
2006-03-26 10:52:31 +08:00
|
|
|
sys(@svn_up,"-r$log_msg->{revision}");
|
2006-02-16 17:24:16 +08:00
|
|
|
$last_commit = git_commit($log_msg, $last_commit, @parents);
|
2006-03-26 10:52:31 +08:00
|
|
|
$last = $log_msg;
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
2006-05-24 16:40:37 +08:00
|
|
|
close $svn_log->{fh};
|
2006-06-13 06:23:48 +08:00
|
|
|
$last->{commit} = $last_commit;
|
2006-03-26 10:52:31 +08:00
|
|
|
return $last;
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
|
|
|
|
2006-06-13 06:23:48 +08:00
|
|
|
sub fetch_lib {
|
|
|
|
my (@parents) = @_;
|
|
|
|
$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
|
|
|
|
my $repo;
|
|
|
|
($repo, $SVN_PATH) = repo_path_split($SVN_URL);
|
|
|
|
$SVN_LOG ||= libsvn_connect($repo);
|
|
|
|
$SVN ||= libsvn_connect($repo);
|
|
|
|
my ($last_rev, $last_commit) = svn_grab_base_rev();
|
|
|
|
my ($base, $head) = libsvn_parse_revision($last_rev);
|
|
|
|
if ($base > $head) {
|
|
|
|
return { revision => $last_rev, commit => $last_commit };
|
|
|
|
}
|
|
|
|
my $index = set_index($GIT_SVN_INDEX);
|
|
|
|
|
|
|
|
# limit ourselves and also fork() since get_log won't release memory
|
|
|
|
# after processing a revision and SVN stuff seems to leak
|
|
|
|
my $inc = 1000;
|
|
|
|
my ($min, $max) = ($base, $head < $base+$inc ? $head : $base+$inc);
|
|
|
|
read_uuid();
|
|
|
|
if (defined $last_commit) {
|
|
|
|
unless (-e $GIT_SVN_INDEX) {
|
|
|
|
sys(qw/git-read-tree/, $last_commit);
|
|
|
|
}
|
|
|
|
chomp (my $x = `git-write-tree`);
|
|
|
|
my ($y) = (`git-cat-file commit $last_commit`
|
|
|
|
=~ /^tree ($sha1)/m);
|
|
|
|
if ($y ne $x) {
|
|
|
|
unlink $GIT_SVN_INDEX or croak $!;
|
|
|
|
sys(qw/git-read-tree/, $last_commit);
|
|
|
|
}
|
|
|
|
chomp ($x = `git-write-tree`);
|
|
|
|
if ($y ne $x) {
|
|
|
|
print STDERR "trees ($last_commit) $y != $x\n",
|
|
|
|
"Something is seriously wrong...\n";
|
|
|
|
}
|
|
|
|
}
|
|
|
|
while (1) {
|
|
|
|
# fork, because using SVN::Pool with get_log() still doesn't
|
|
|
|
# seem to help enough to keep memory usage down.
|
|
|
|
defined(my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
$SVN::Error::handler = \&libsvn_skip_unknown_revs;
|
|
|
|
|
|
|
|
# Yes I'm perfectly aware that the fourth argument
|
|
|
|
# below is the limit revisions number. Unfortunately
|
|
|
|
# performance sucks with it enabled, so it's much
|
|
|
|
# faster to fetch revision ranges instead of relying
|
|
|
|
# on the limiter.
|
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore, as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 18:07:14 +08:00
|
|
|
libsvn_get_log($SVN_LOG, '/'.$SVN_PATH,
|
|
|
|
$min, $max, 0, 1, 1,
|
2006-06-13 06:23:48 +08:00
|
|
|
sub {
|
|
|
|
my $log_msg;
|
|
|
|
if ($last_commit) {
|
|
|
|
$log_msg = libsvn_fetch(
|
|
|
|
$last_commit, @_);
|
|
|
|
$last_commit = git_commit(
|
|
|
|
$log_msg,
|
|
|
|
$last_commit,
|
|
|
|
@parents);
|
|
|
|
} else {
|
|
|
|
$log_msg = libsvn_new_tree(@_);
|
|
|
|
$last_commit = git_commit(
|
|
|
|
$log_msg, @parents);
|
|
|
|
}
|
|
|
|
});
|
|
|
|
exit 0;
|
|
|
|
}
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
|
|
|
($last_rev, $last_commit) = svn_grab_base_rev();
|
|
|
|
last if ($max >= $head);
|
|
|
|
$min = $max + 1;
|
|
|
|
$max += $inc;
|
|
|
|
$max = $head if ($max > $head);
|
|
|
|
}
|
|
|
|
restore_index($index);
|
|
|
|
return { revision => $last_rev, commit => $last_commit };
|
|
|
|
}
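The chunked revision walk in fetch_lib above ($inc windows of 1000 revisions, forking once per window because get_log() leaks) advances $min/$max until $head is reached. A standalone sketch of just the window arithmetic, with assumed sample values and the fork omitted:

```perl
#!/usr/bin/env perl
# Sketch: the [$min, $max] window advance used by fetch_lib above.
use strict;
use warnings;

my ($base, $head, $inc) = (100, 3250, 1000);
my ($min, $max) = ($base, $head < $base + $inc ? $head : $base + $inc);
my @windows;
while (1) {
	push @windows, "$min:$max";      # one get_log() call per window
	last if $max >= $head;
	$min = $max + 1;
	$max += $inc;
	$max = $head if $max > $head;
}
print join(' ', @windows), "\n";         # 100:1100 1101:2100 2101:3100 3101:3250
```

The final window is clamped to $head, so no revision range ever overshoots the repository head.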
|
|
|
|
|
2006-02-16 17:24:16 +08:00
|
|
|
sub commit {
|
|
|
|
my (@commits) = @_;
|
2006-03-02 13:58:31 +08:00
|
|
|
check_upgrade_needed();
|
2006-02-16 17:24:16 +08:00
|
|
|
if ($_stdin || !@commits) {
|
|
|
|
print "Reading from stdin...\n";
|
|
|
|
@commits = ();
|
|
|
|
while (<STDIN>) {
|
2006-03-03 17:20:09 +08:00
|
|
|
if (/\b($sha1_short)\b/o) {
|
2006-02-16 17:24:16 +08:00
|
|
|
unshift @commits, $1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
my @revs;
|
2006-02-21 02:57:26 +08:00
|
|
|
foreach my $c (@commits) {
|
|
|
|
chomp(my @tmp = safe_qx('git-rev-parse',$c));
|
|
|
|
if (scalar @tmp == 1) {
|
|
|
|
push @revs, $tmp[0];
|
|
|
|
} elsif (scalar @tmp > 1) {
|
|
|
|
push @revs, reverse (safe_qx('git-rev-list',@tmp));
|
|
|
|
} else {
|
|
|
|
die "Failed to rev-parse $c\n";
|
|
|
|
}
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
|
|
|
chomp @revs;
|
2006-06-13 06:23:48 +08:00
|
|
|
$_use_lib ? commit_lib(@revs) : commit_cmd(@revs);
|
|
|
|
print "Done committing ",scalar @revs," revisions to SVN\n";
|
|
|
|
}
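The stdin branch of sub commit above scans each line for a sha1 and unshifts matches, so newest-first git-rev-list output ends up oldest-first, the order SVN commits must be replayed in. A small sketch with hypothetical sample input:

```perl
#!/usr/bin/env perl
# Sketch: the sha1 extraction and order reversal from sub commit above.
use strict;
use warnings;

my $sha1_short = qr/[a-f\d]{4,40}/;      # same pattern shape as above
my @commits;
for my $line ("beefdead third", "cafebabe second", "deadbeef first") {
	unshift @commits, $1 if $line =~ /\b($sha1_short)\b/;
}
print "@commits\n";                      # deadbeef cafebabe beefdead
```

unshift (rather than push) is what flips the newest-first input into commit order.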
|
|
|
|
|
|
|
|
sub commit_cmd {
|
|
|
|
my (@revs) = @_;
|
2006-02-16 17:24:16 +08:00
|
|
|
|
2006-06-03 17:56:33 +08:00
|
|
|
chdir $SVN_WC or croak "Unable to chdir $SVN_WC: $!\n";
|
2006-03-09 19:48:47 +08:00
|
|
|
my $info = svn_info('.');
|
2006-06-03 17:56:33 +08:00
|
|
|
my $fetched = fetch();
|
|
|
|
if ($info->{Revision} != $fetched->{revision}) {
|
|
|
|
print STDERR "There are new revisions that were fetched ",
|
|
|
|
"and need to be merged (or acknowledged) ",
|
|
|
|
"before committing.\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
$info = svn_info('.');
|
2006-03-09 19:48:47 +08:00
|
|
|
read_uuid($info);
|
2006-06-13 19:02:23 +08:00
|
|
|
my $last = $fetched;
|
2006-02-16 17:24:16 +08:00
|
|
|
foreach my $c (@revs) {
|
2006-06-13 19:02:23 +08:00
|
|
|
my $mods = svn_checkout_tree($last, $c);
|
2006-02-21 02:57:28 +08:00
|
|
|
if (scalar @$mods == 0) {
|
|
|
|
print "Skipping, no changes detected\n";
|
|
|
|
next;
|
|
|
|
}
|
2006-06-13 19:02:23 +08:00
|
|
|
$last = svn_commit_tree($last, $c);
|
2006-02-16 17:24:16 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-06-13 06:23:48 +08:00
|
|
|
sub commit_lib {
|
|
|
|
my (@revs) = @_;
|
|
|
|
my ($r_last, $cmt_last) = svn_grab_base_rev();
|
|
|
|
defined $r_last or die "Must have an existing revision to commit\n";
|
2006-06-16 03:50:12 +08:00
|
|
|
my $fetched = fetch();
|
2006-06-13 06:23:48 +08:00
|
|
|
if ($r_last != $fetched->{revision}) {
|
|
|
|
print STDERR "There are new revisions that were fetched ",
|
|
|
|
"and need to be merged (or acknowledged) ",
|
|
|
|
"before committing.\n",
|
|
|
|
"last rev: $r_last\n",
|
|
|
|
" current: $fetched->{revision}\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
read_uuid();
|
|
|
|
my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef, 0) : ();
|
|
|
|
my $commit_msg = "$GIT_SVN_DIR/.svn-commit.tmp.$$";
|
|
|
|
|
2006-08-26 03:28:18 +08:00
|
|
|
my $repo;
|
|
|
|
($repo, $SVN_PATH) = repo_path_split($SVN_URL);
|
2006-06-28 10:39:12 +08:00
|
|
|
set_svn_commit_env();
|
2006-06-13 06:23:48 +08:00
|
|
|
foreach my $c (@revs) {
|
2006-06-22 16:22:46 +08:00
|
|
|
my $log_msg = get_commit_message($c, $commit_msg);
|
|
|
|
|
2006-06-13 06:23:48 +08:00
|
|
|
# fork for each commit because there's a memory leak I
|
|
|
|
# can't track down... (it's probably in the SVN code)
|
|
|
|
defined(my $pid = open my $fh, '-|') or croak $!;
|
|
|
|
if (!$pid) {
|
2006-08-26 03:28:18 +08:00
|
|
|
$SVN_LOG = libsvn_connect($repo);
|
|
|
|
$SVN = libsvn_connect($repo);
|
2006-06-13 06:23:48 +08:00
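The GIT_SVN_NO_LIB fallback described above can be exercised from the shell. A minimal sketch (the `git-svn fetch` invocation is illustrative and assumes an existing git-svn checkout; the variable only needs to be non-empty):

```shell
# Force the command-line svn client instead of the SVN::* Perl bindings
# for a single invocation:
#     GIT_SVN_NO_LIB=1 git-svn fetch
# git-svn checks the variable at startup; a non-empty value disables the
# library code path (mirrored here without invoking git-svn itself):
use_lib=$([ -n "$GIT_SVN_NO_LIB" ] && echo 0 || echo 1)
echo "use_lib=$use_lib"
```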
			my $ed = SVN::Git::Editor->new(
					{ r => $r_last,
					  ra => $SVN_LOG,
					  c => $c,
					  svn_path => $SVN_PATH
					},
					$SVN->get_commit_editor(
						$log_msg->{msg},
						sub {
							libsvn_commit_cb(
								@_, $c,
								$log_msg->{msg},
								$r_last,
								$cmt_last)
						},
						@lock)
					);
			my $mods = libsvn_checkout_tree($cmt_last, $c, $ed);
			if (@$mods == 0) {
				print "No changes\nr$r_last = $cmt_last\n";
				$ed->abort_edit;
			} else {
				$ed->close_edit;
			}
			exit 0;
		}
		my ($r_new, $cmt_new, $no);
		while (<$fh>) {
			print $_;
			chomp;
			if (/^r(\d+) = ($sha1)$/o) {
				($r_new, $cmt_new) = ($1, $2);
			} elsif ($_ eq 'No changes') {
				$no = 1;
			}
		}
		close $fh or croak $?;
		if (! defined $r_new && ! defined $cmt_new) {
			unless ($no) {
				die "Failed to parse revision information\n";
			}
		} else {
			($r_last, $cmt_last) = ($r_new, $cmt_new);
		}
	}
	$ENV{LC_ALL} = 'C';
	unlink $commit_msg;
}

sub dcommit {
	my $gs = "refs/remotes/$GIT_SVN";
	chomp(my @refs = safe_qx(qw/git-rev-list --no-merges/, "$gs..HEAD"));
	my $last_rev;
	foreach my $d (reverse @refs) {
		if (quiet_run('git-rev-parse','--verify',"$d~1") != 0) {
			die "Commit $d\n",
			    "has no parent commit, and therefore ",
			    "nothing to diff against.\n",
			    "You should be working from a repository ",
			    "originally created by git-svn\n";
		}
		unless (defined $last_rev) {
			(undef, $last_rev, undef) = cmt_metadata("$d~1");
			unless (defined $last_rev) {
				die "Unable to extract revision information ",
				    "from commit $d~1\n";
			}
		}
		if ($_dry_run) {
			print "diff-tree $d~1 $d\n";
		} else {
			if (my $r = commit_diff("$d~1", $d, undef, $last_rev)) {
				$last_rev = $r;
			} # else: no changes, same $last_rev
		}
	}
	return if $_dry_run;
	fetch();
	my @diff = safe_qx(qw/git-diff-tree HEAD/, $gs);
	my @finish;
	if (@diff) {
		@finish = qw/rebase/;
		push @finish, qw/--merge/ if $_merge;
		push @finish, "--strategy=$_strategy" if $_strategy;
		print STDERR "W: HEAD and $gs differ, using @finish:\n", @diff;
	} else {
		print "No changes between current HEAD and $gs\n",
		      "Hard resetting to the latest $gs\n";
		@finish = qw/reset --mixed/;
	}
	sys('git', @finish, $gs);
}
sub show_ignore {
	$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
	$_use_lib ? show_ignore_lib() : show_ignore_cmd();
}

sub show_ignore_cmd {
	require File::Find or die $!;
	if (defined $_revision) {
		die "-r/--revision option doesn't work unless the Perl SVN ",
		    "libraries are used\n";
	}
	chdir $SVN_WC or croak $!;
	my %ign;
	File::Find::find({wanted=>sub{if(lstat $_ && -d _ && -d "$_/.svn"){
		s#^\./##;
		@{$ign{$_}} = svn_propget_base('svn:ignore', $_);
		}}, no_chdir=>1},'.');

	print "\n# /\n";
	foreach (@{$ign{'.'}}) { print '/',$_ if /\S/ }
	delete $ign{'.'};
	foreach my $i (sort keys %ign) {
		print "\n# ",$i,"\n";
		foreach (@{$ign{$i}}) { print '/',$i,'/',$_ if /\S/ }
	}
}
sub show_ignore_lib {
	my $repo;
	($repo, $SVN_PATH) = repo_path_split($SVN_URL);
	$SVN ||= libsvn_connect($repo);
	my $r = defined $_revision ? $_revision : $SVN->get_latest_revnum;
	libsvn_traverse_ignore(\*STDOUT, $SVN_PATH, $r);
}

sub graft_branches {
	my $gr_file = "$GIT_DIR/info/grafts";
	my ($grafts, $comments) = read_grafts($gr_file);
	my $gr_sha1;

	if (%$grafts) {
		# temporarily disable our grafts file to make this idempotent
		chomp($gr_sha1 = safe_qx(qw/git-hash-object -w/,$gr_file));
		rename $gr_file, "$gr_file~$gr_sha1" or croak $!;
	}

	my $l_map = read_url_paths();
	my @re = map { qr/$_/is } @_opt_m if @_opt_m;
	unless ($_no_default_regex) {
		push @re, (qr/\b(?:merge|merging|merged)\s+with\s+([\w\.\-]+)/i,
			qr/\b(?:merge|merging|merged)\s+([\w\.\-]+)/i,
			qr/\b(?:from|of)\s+([\w\.\-]+)/i );
	}
	foreach my $u (keys %$l_map) {
		if (@re) {
			foreach my $p (keys %{$l_map->{$u}}) {
				graft_merge_msg($grafts,$l_map,$u,$p,@re);
			}
		}
		unless ($_no_graft_copy) {
			if ($_use_lib) {
				graft_file_copy_lib($grafts,$l_map,$u);
			} else {
				graft_file_copy_cmd($grafts,$l_map,$u);
			}
		}
	}
	graft_tree_joins($grafts);

	write_grafts($grafts, $comments, $gr_file);
	unlink "$gr_file~$gr_sha1" if $gr_sha1;
}

sub multi_init {
	my $url = shift;
	$_trunk ||= 'trunk';
	$_trunk =~ s#/+$##;
	$url =~ s#/+$## if $url;
	if ($_trunk !~ m#^[a-z\+]+://#) {
		$_trunk = '/' . $_trunk if ($_trunk !~ m#^/#);
		unless ($url) {
			print STDERR "E: '$_trunk' is not a complete URL ",
			             "and a separate URL is not specified\n";
			exit 1;
		}
		$_trunk = $url . $_trunk;
	}
	my $ch_id;
	if ($GIT_SVN eq 'git-svn') {
		$ch_id = 1;
		$GIT_SVN = $ENV{GIT_SVN_ID} = 'trunk';
	}
	init_vars();
	unless (-d $GIT_SVN_DIR) {
		print "GIT_SVN_ID set to 'trunk' for $_trunk\n" if $ch_id;
		init($_trunk);
		sys('git-repo-config', 'svn.trunk', $_trunk);
	}
	complete_url_ls_init($url, $_branches, '--branches/-b', '');
	complete_url_ls_init($url, $_tags, '--tags/-t', 'tags/');
}

sub multi_fetch {
	# try to do trunk first, since branches/tags
	# may be descended from it.
	if (-e "$GIT_DIR/svn/trunk/info/url") {
		fetch_child_id('trunk', @_);
	}
	rec_fetch('', "$GIT_DIR/svn", @_);
}

sub show_log {
	my (@args) = @_;
	my ($r_min, $r_max);
	my $r_last = -1; # prevent dupes
	rload_authors() if $_authors;
	if (defined $TZ) {
		$ENV{TZ} = $TZ;
	} else {
		delete $ENV{TZ};
	}
	if (defined $_revision) {
		if ($_revision =~ /^(\d+):(\d+)$/) {
			($r_min, $r_max) = ($1, $2);
		} elsif ($_revision =~ /^\d+$/) {
			$r_min = $r_max = $_revision;
		} else {
			print STDERR "-r$_revision is not supported, use ",
				"standard \'git log\' arguments instead\n";
			exit 1;
		}
	}

	my $pid = open(my $log,'-|');
	defined $pid or croak $!;
	if (!$pid) {
		exec(git_svn_log_cmd($r_min,$r_max), @args) or croak $!;
	}
	setup_pager();
	my (@k, $c, $d);

	while (<$log>) {
		if (/^commit ($sha1_short)/o) {
			my $cmt = $1;
			if ($c && cmt_showable($c) && $c->{r} != $r_last) {
				$r_last = $c->{r};
				process_commit($c, $r_min, $r_max, \@k) or
								goto out;
			}
			$d = undef;
			$c = { c => $cmt };
		} elsif (/^author (.+) (\d+) ([\-\+]?\d+)$/) {
			get_author_info($c, $1, $2, $3);
		} elsif (/^(?:tree|parent|committer) /) {
			# ignore
		} elsif (/^:\d{6} \d{6} $sha1_short/o) {
			push @{$c->{raw}}, $_;
		} elsif (/^[ACRMDT]\t/) {
			# we could add $SVN_PATH here, but that requires
			# remote access at the moment (repo_path_split)...
			s#^([ACRMDT])\t# $1 #;
			push @{$c->{changed}}, $_;
		} elsif (/^diff /) {
			$d = 1;
			push @{$c->{diff}}, $_;
		} elsif ($d) {
			push @{$c->{diff}}, $_;
		} elsif (/^    (git-svn-id:.+)$/) {
			($c->{url}, $c->{r}, undef) = extract_metadata($1);
		} elsif (s/^    //) {
			push @{$c->{l}}, $_;
		}
	}
	if ($c && defined $c->{r} && $c->{r} != $r_last) {
		$r_last = $c->{r};
		process_commit($c, $r_min, $r_max, \@k);
	}
	if (@k) {
		my $swap = $r_max;
		$r_max = $r_min;
		$r_min = $swap;
		process_commit($_, $r_min, $r_max) foreach reverse @k;
	}
out:
	close $log;
	print '-' x72,"\n" unless $_incremental || $_oneline;
}

sub commit_diff_usage {
	print STDERR "Usage: $0 commit-diff <tree-ish> <tree-ish> [<URL>]\n";
	exit 1
}

sub commit_diff {
	if (!$_use_lib) {
		print STDERR "commit-diff must be used with SVN libraries\n";
		exit 1;
	}
	my $ta = shift or commit_diff_usage();
	my $tb = shift or commit_diff_usage();
	if (!eval { $SVN_URL = shift || file_to_s("$GIT_SVN_DIR/info/url") }) {
		print STDERR "Needed URL or usable git-svn id command-line\n";
		commit_diff_usage();
	}
	my $r = shift;
	unless (defined $r) {
		if (defined $_revision) {
			$r = $_revision
		} else {
			die "-r|--revision is a required argument\n";
		}
	}
	if (defined $_message && defined $_file) {
		print STDERR "Both --message/-m and --file/-F specified ",
			"for the commit message.\n",
			"I have no idea what you mean\n";
		exit 1;
	}
	if (defined $_file) {
		$_message = file_to_s($_file);
	} else {
		$_message ||= get_commit_message($tb,
					"$GIT_DIR/.svn-commit.tmp.$$")->{msg};
	}
	my $repo;
	($repo, $SVN_PATH) = repo_path_split($SVN_URL);
	$SVN_LOG ||= libsvn_connect($repo);
	$SVN ||= libsvn_connect($repo);
	if ($r eq 'HEAD') {
		$r = $SVN->get_latest_revnum;
	} elsif ($r !~ /^\d+$/) {
		die "revision argument: $r not understood by git-svn\n";
	}
	my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef, 0) : ();
	my $rev_committed;
	my $ed = SVN::Git::Editor->new({ r => $r,
					ra => $SVN_LOG, c => $tb,
					svn_path => $SVN_PATH
				},
				$SVN->get_commit_editor($_message,
					sub {
						$rev_committed = $_[0];
						print "Committed $_[0]\n";
					}, @lock)
				);
	my $mods = libsvn_checkout_tree($ta, $tb, $ed);
	if (@$mods == 0) {
		print "No changes\n$ta == $tb\n";
		$ed->abort_edit;
	} else {
		$ed->close_edit;
	}
	$_message = $_file = undef;
	return $rev_committed;
}

########################### utility functions #########################

sub cmt_showable {
	my ($c) = @_;
	return 1 if defined $c->{r};
	if ($c->{l} && $c->{l}->[-1] eq "...\n" &&
				$c->{a_raw} =~ /\@([a-f\d\-]+)>$/) {
		my @msg = safe_qx(qw/git-cat-file commit/, $c->{c});
		shift @msg while ($msg[0] ne "\n");
		shift @msg;
		@{$c->{l}} = grep !/^git-svn-id: /, @msg;

		(undef, $c->{r}, undef) = extract_metadata(
				(grep(/^git-svn-id: /, @msg))[-1]);
	}
	return defined $c->{r};
}

sub git_svn_log_cmd {
	my ($r_min, $r_max) = @_;
	my @cmd = (qw/git-log --abbrev-commit --pretty=raw
			--default/, "refs/remotes/$GIT_SVN");
	push @cmd, '-r' unless $_non_recursive;
	push @cmd, qw/--raw --name-status/ if $_verbose;
	return @cmd unless defined $r_max;
	if ($r_max == $r_min) {
		push @cmd, '--max-count=1';
		if (my $c = revdb_get($REVDB, $r_max)) {
			push @cmd, $c;
		}
	} else {
		my ($c_min, $c_max);
		$c_max = revdb_get($REVDB, $r_max);
		$c_min = revdb_get($REVDB, $r_min);
		if (defined $c_min && defined $c_max) {
			if ($r_max > $r_min) {
				push @cmd, "$c_min..$c_max";
			} else {
				push @cmd, "$c_max..$c_min";
			}
		} elsif ($r_max > $r_min) {
			push @cmd, $c_max;
		} else {
			push @cmd, $c_min;
		}
	}
	return @cmd;
}

sub fetch_child_id {
	my $id = shift;
	print "Fetching $id\n";
	my $ref = "$GIT_DIR/refs/remotes/$id";
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
|
|
|
defined(my $pid = open my $fh, '-|') or croak $!;
|
2006-06-16 03:50:12 +08:00
|
|
|
if (!$pid) {
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
|
|
|
$_repack = undef;
|
2006-06-16 03:50:12 +08:00
|
|
|
$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
|
|
|
|
init_vars();
|
|
|
|
fetch(@_);
|
|
|
|
exit 0;
|
|
|
|
}
|
|
|
|
while (<$fh>) {
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
|
|
|
print $_;
|
|
|
|
check_repack() if (/^r\d+ = $sha1/);
|
2006-06-16 03:50:12 +08:00
|
|
|
}
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
|
|
|
close $fh or croak $?;
|
2006-06-16 03:50:12 +08:00
|
|
|
}

sub rec_fetch {
	my ($pfx, $p, @args) = @_;
	my @dir;
	foreach (sort <$p/*>) {
		if (-r "$_/info/url") {
			$pfx .= '/' if $pfx && $pfx !~ m!/$!;
			my $id = $pfx . basename $_;
			next if $id eq 'trunk';
			fetch_child_id($id, @args);
		} elsif (-d $_) {
			push @dir, $_;
		}
	}
	foreach (@dir) {
		my $x = $_;
		$x =~ s!^\Q$GIT_DIR\E/svn/!!;
		rec_fetch($x, $_);
	}
}

sub complete_url_ls_init {
	my ($url, $var, $switch, $pfx) = @_;
	unless ($var) {
		print STDERR "W: $switch not specified\n";
		return;
	}
	$var =~ s#/+$##;
	if ($var !~ m#^[a-z\+]+://#) {
		$var = '/' . $var if ($var !~ m#^/#);
		unless ($url) {
			print STDERR "E: '$var' is not a complete URL ",
				"and a separate URL is not specified\n";
			exit 1;
		}
		$var = $url . $var;
	}
	chomp(my @ls = $_use_lib ? libsvn_ls_fullurl($var)
			: safe_qx(qw/svn ls --non-interactive/, $var));
	my $old = $GIT_SVN;
	defined(my $pid = fork) or croak $!;
	if (!$pid) {
		foreach my $u (map { "$var/$_" } (grep m!/$!, @ls)) {
			$u =~ s#/+$##;
			if ($u !~ m!\Q$var\E/(.+)$!) {
				print STDERR "W: Unrecognized URL: $u\n";
				die "This should never happen\n";
			}
			# don't try to init already existing refs
			my $id = $pfx.$1;
			$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
			init_vars();
			unless (-d $GIT_SVN_DIR) {
				print "init $u => $id\n";
				init($u);
			}
		}
		exit 0;
	}
	waitpid $pid, 0;
	croak $? if $?;
	my ($n) = ($switch =~ /^--(\w+)/);
	sys('git-repo-config', "svn.$n", $var);
}

sub common_prefix {
	my $paths = shift;
	my %common;
	foreach (@$paths) {
		my @tmp = split m#/#, $_;
		my $p = '';
		while (my $x = shift @tmp) {
			$p .= "/$x";
			$common{$p} ||= 0;
			$common{$p}++;
		}
	}
	foreach (sort {length $b <=> length $a} keys %common) {
		if ($common{$_} == @$paths) {
			return $_;
		}
	}
	return '';
}

# grafts set here are 'stronger' in that they're based on actual tree
# matches, and won't be deleted from merge-base checking in write_grafts()
sub graft_tree_joins {
	my $grafts = shift;
	map_tree_joins() if (@_branch_from && !%tree_map);
	return unless %tree_map;

	git_svn_each(sub {
		my $i = shift;
		defined(my $pid = open my $fh, '-|') or croak $!;
		if (!$pid) {
			exec qw/git-rev-list --pretty=raw/,
					"refs/remotes/$i" or croak $!;
		}
		while (<$fh>) {
			next unless /^commit ($sha1)$/o;
			my $c = $1;
			my ($t) = (<$fh> =~ /^tree ($sha1)$/o);
			next unless $tree_map{$t};

			my $l;
			do {
				$l = readline $fh;
			} until ($l =~ /^committer (?:.+) (\d+) ([\-\+]?\d+)$/);

			my ($s, $tz) = ($1, $2);
			if ($tz =~ s/^\+//) {
				$s += tz_to_s_offset($tz);
			} elsif ($tz =~ s/^\-//) {
				$s -= tz_to_s_offset($tz);
			}

			my ($url_a, $r_a, $uuid_a) = cmt_metadata($c);

			foreach my $p (@{$tree_map{$t}}) {
				next if $p eq $c;
				my $mb = eval {
					safe_qx('git-merge-base', $c, $p)
				};
				next unless ($@ || $?);
				if (defined $r_a) {
					# see if SVN says it's a relative
					my ($url_b, $r_b, $uuid_b) =
							cmt_metadata($p);
					next if (defined $url_b &&
							defined $url_a &&
							($url_a eq $url_b) &&
							($uuid_a eq $uuid_b));
					if ($uuid_a eq $uuid_b) {
						if ($r_b < $r_a) {
							$grafts->{$c}->{$p} = 2;
							next;
						} elsif ($r_b > $r_a) {
							$grafts->{$p}->{$c} = 2;
							next;
						}
					}
				}
				my $ct = get_commit_time($p);
				if ($ct < $s) {
					$grafts->{$c}->{$p} = 2;
				} elsif ($ct > $s) {
					$grafts->{$p}->{$c} = 2;
				}
				# what should we do when $ct == $s ?
			}
		}
		close $fh or croak $?;
	});
}

# this isn't funky-filename safe, but good enough for now...
sub graft_file_copy_cmd {
	my ($grafts, $l_map, $u) = @_;
	my $paths = $l_map->{$u};
	my $pfx = common_prefix([keys %$paths]);
	$SVN_URL ||= $u.$pfx;
	my $pid = open my $fh, '-|';
	defined $pid or croak $!;
	unless ($pid) {
		my @exec = qw/svn log -v/;
		push @exec, "-r$_revision" if defined $_revision;
		exec @exec, $u.$pfx or croak $!;
	}
	my ($r, $mp) = (undef, undef);
	while (<$fh>) {
		chomp;
		if (/^\-{72}$/) {
			$mp = $r = undef;
		} elsif (/^r(\d+) \| /) {
			$r = $1 unless defined $r;
		} elsif (/^Changed paths:/) {
			$mp = 1;
		} elsif ($mp && m#^   [AR] /(\S.*?) \(from /(\S+?):(\d+)\)$#) {
			my ($p1, $p0, $r0) = ($1, $2, $3);
			my $c = find_graft_path_commit($paths, $p1, $r);
			next unless $c;
			find_graft_path_parents($grafts, $paths, $c, $p0, $r0);
		}
	}
}
|
|
|
|
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
sub graft_file_copy_lib {
|
|
|
|
my ($grafts, $l_map, $u) = @_;
|
|
|
|
my $tree_paths = $l_map->{$u};
|
|
|
|
my $pfx = common_prefix([keys %$tree_paths]);
|
|
|
|
my ($repo, $path) = repo_path_split($u.$pfx);
|
|
|
|
$SVN_LOG ||= libsvn_connect($repo);
|
|
|
|
$SVN ||= libsvn_connect($repo);
|
|
|
|
|
|
|
|
my ($base, $head) = libsvn_parse_revision();
|
|
|
|
my $inc = 1000;
|
|
|
|
my ($min, $max) = ($base, $head < $base+$inc ? $head : $base+$inc);
|
2006-06-13 19:02:23 +08:00
|
|
|
my $eh = $SVN::Error::handler;
|
|
|
|
$SVN::Error::handler = \&libsvn_skip_unknown_revs;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
while (1) {
|
|
|
|
my $pool = SVN::Pool->new;
|
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore, as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 18:07:14 +08:00
|
|
|
libsvn_get_log($SVN_LOG, "/$path", $min, $max, 0, 1, 1,
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
sub {
|
|
|
|
libsvn_graft_file_copies($grafts, $tree_paths,
|
|
|
|
$path, @_);
|
|
|
|
}, $pool);
|
|
|
|
$pool->clear;
|
|
|
|
last if ($max >= $head);
|
|
|
|
$min = $max + 1;
|
|
|
|
$max += $inc;
|
|
|
|
$max = $head if ($max > $head);
|
|
|
|
}
|
2006-06-13 19:02:23 +08:00
|
|
|
$SVN::Error::handler = $eh;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
}

sub process_merge_msg_matches {
	my ($grafts, $l_map, $u, $p, $c, @matches) = @_;
	my (@strong, @weak);
	foreach (@matches) {
		# merging with ourselves is not interesting
		next if $_ eq $p;
		if ($l_map->{$u}->{$_}) {
			push @strong, $_;
		} else {
			push @weak, $_;
		}
	}
	foreach my $w (@weak) {
		last if @strong;
		# no exact match, use branch name as regexp.
		my $re = qr/\Q$w\E/i;
		foreach (keys %{$l_map->{$u}}) {
			if (/$re/) {
				push @strong, $l_map->{$u}->{$_};
				last;
			}
		}
		last if @strong;
		$w = basename($w);
		$re = qr/\Q$w\E/i;
		foreach (keys %{$l_map->{$u}}) {
			if (/$re/) {
				push @strong, $l_map->{$u}->{$_};
				last;
			}
		}
	}
	my ($rev) = ($c->{m} =~ /^git-svn-id:\s(?:\S+?)\@(\d+)
					\s(?:[a-f\d\-]+)$/xsm);
	unless (defined $rev) {
		($rev) = ($c->{m} =~/^git-svn-id:\s(\d+)
					\@(?:[a-f\d\-]+)/xsm);
		return unless defined $rev;
	}
	foreach my $m (@strong) {
		my ($r0, $s0) = find_rev_before($rev, $m, 1);
		$grafts->{$c->{c}}->{$s0} = 1 if defined $s0;
	}
}

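For context on the revision extraction above: process_merge_msg_matches pulls the SVN revision out of a commit's git-svn-id: trailer, trying the URL form (`git-svn-id: URL@REV UUID`) first and falling back to the older `REV@UUID` form. A standalone sketch of just that extraction — the `rev_from_msg` helper name and the sample trailers are made up for illustration:

```perl
use strict;
use warnings;

# Extract the SVN revision from a git-svn-id: trailer.
# Tries the "URL@REV UUID" form first, then the older "REV@UUID" form.
sub rev_from_msg {
	my $m = shift;
	my ($rev) = ($m =~ /^git-svn-id:\s(?:\S+?)\@(\d+)\s(?:[a-f\d\-]+)$/sm);
	($rev) = ($m =~ /^git-svn-id:\s(\d+)\@(?:[a-f\d\-]+)/sm)
		unless defined $rev;
	return $rev;
}

my $new = "some message\ngit-svn-id: svn://host/repo/trunk\@1234 0a1b2c3d-0000\n";
my $old = "some message\ngit-svn-id: 1234\@0a1b2c3d-0000\n";
print rev_from_msg($new), " ", rev_from_msg($old), "\n";   # 1234 1234
```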
sub graft_merge_msg {
	my ($grafts, $l_map, $u, $p, @re) = @_;

	my $x = $l_map->{$u}->{$p};
	my $rl = rev_list_raw($x);
	while (my $c = next_rev_list_entry($rl)) {
		foreach my $re (@re) {
			my (@br) = ($c->{m} =~ /$re/g);
			next unless @br;
			process_merge_msg_matches($grafts,$l_map,$u,$p,$c,@br);
		}
	}
}

sub read_uuid {
	return if $SVN_UUID;
	if ($_use_lib) {
		my $pool = SVN::Pool->new;
		$SVN_UUID = $SVN->get_uuid($pool);
		$pool->clear;
	} else {
		my $info = shift || svn_info('.');
		$SVN_UUID = $info->{'Repository UUID'} or
				croak "Repository UUID unreadable\n";
	}
}

sub quiet_run {
	my $pid = fork;
	defined $pid or croak $!;
	if (!$pid) {
		open my $null, '>', '/dev/null' or croak $!;
		open STDERR, '>&', $null or croak $!;
		open STDOUT, '>&', $null or croak $!;
		exec @_ or croak $!;
	}
	waitpid $pid, 0;
	return $?;
}

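quiet_run above forks, points the child's stdout and stderr at /dev/null, execs the external command, and returns its raw wait status. A minimal self-contained sketch of the same fork/exec/silence pattern — `run_silenced` is a hypothetical stand-in name, and `true`/`false` are just placeholder commands:

```perl
use strict;
use warnings;
use Carp qw/croak/;

# Run a command with its output discarded; return the raw wait status $?.
sub run_silenced {
	my @cmd = @_;
	my $pid = fork;
	defined $pid or croak "fork failed: $!";
	if (!$pid) {
		open my $null, '>', '/dev/null' or croak $!;
		open STDOUT, '>&', $null or croak $!;
		open STDERR, '>&', $null or croak $!;
		exec @cmd or croak "exec failed: $!";
	}
	waitpid $pid, 0;
	return $?;   # 0 on success, non-zero otherwise
}

print run_silenced('true') == 0 ? "ok\n" : "failed\n";
```

Checking `$?` rather than a boolean keeps the exit status, signal number, and core-dump flag available to the caller, exactly as quiet_run does.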
sub repo_path_split {
	my $full_url = shift;
	$full_url =~ s#/+$##;

	foreach (@repo_path_split_cache) {
		if ($full_url =~ s#$_##) {
			my $u = $1;
			$full_url =~ s#^/+##;
			return ($u, $full_url);
		}
	}

	if ($_use_lib) {
		my $tmp = libsvn_connect($full_url);
		my $url = $tmp->get_repos_root;
		$full_url =~ s#^\Q$url\E/*##;
		push @repo_path_split_cache, qr/^(\Q$url\E)/;
		return ($url, $full_url);
	} else {
		my ($url, $path) = ($full_url =~ m!^([a-z\+]+://[^/]*)(.*)$!i);
		$path =~ s#^/+##;
		my @paths = split(m#/+#, $path);
		while (quiet_run(qw/svn ls --non-interactive/, $url)) {
			my $n = shift @paths || last;
			$url .= "/$n";
		}
		push @repo_path_split_cache, qr/^(\Q$url\E)/;
		$path = join('/',@paths);
		return ($url, $path);
	}
}

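In repo_path_split's command-line fallback, the URL is first split into scheme-plus-host and path with a regex, then path components are appended one by one until `svn ls` succeeds at the repository root. The initial regex split on its own, using a made-up example URL:

```perl
use strict;
use warnings;

# Same pattern repo_path_split uses to separate scheme+host from the path.
my $full_url = 'svn://example.com/repo/trunk/src';
my ($url, $path) = ($full_url =~ m!^([a-z\+]+://[^/]*)(.*)$!i);
$path =~ s#^/+##;   # drop leading slashes, like the original
print "$url | $path\n";   # svn://example.com | repo/trunk/src
```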
sub setup_git_svn {
	defined $SVN_URL or croak "SVN repository location required\n";
	unless (-d $GIT_DIR) {
		croak "GIT_DIR=$GIT_DIR does not exist!\n";
	}
	mkpath([$GIT_SVN_DIR]);
	mkpath(["$GIT_SVN_DIR/info"]);
	open my $fh, '>>',$REVDB or croak $!;
	close $fh;
	s_to_file($SVN_URL,"$GIT_SVN_DIR/info/url");
}

sub assert_svn_wc_clean {
	return if $_use_lib;
	my ($svn_rev) = @_;
	croak "$svn_rev is not an integer!\n" unless ($svn_rev =~ /^\d+$/);
	my $lcr = svn_info('.')->{'Last Changed Rev'};
	if ($svn_rev != $lcr) {
		print STDERR "Checking for copy-tree ... ";
		my @diff = grep(/^Index: /,(safe_qx(qw(svn diff),
						"-r$lcr:$svn_rev")));
		if (@diff) {
			croak "Nope! Expected r$svn_rev, got r$lcr\n";
		} else {
			print STDERR "OK!\n";
		}
	}
	my @status = grep(!/^Performing status on external/,(`svn status`));
	@status = grep(!/^\s*$/,@status);
	@status = grep(!/^X/,@status) if $_no_ignore_ext;
	if (scalar @status) {
		print STDERR "Tree ($SVN_WC) is not clean:\n";
		print STDERR $_ foreach @status;
		croak;
	}
}

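assert_svn_wc_clean (in command-line mode) treats the working copy as dirty if `svn status` prints anything beyond externals chatter and blank lines. The same grep chain run over a fabricated sample of `svn status` output:

```perl
use strict;
use warnings;

# Fabricated `svn status` output: one real modification plus noise.
my @status = (
	"M       lib/foo.pm\n",
	"\n",
	"Performing status on external item at 'vendor'\n",
	"X       vendor\n",
);
@status = grep(!/^Performing status on external/, @status);
@status = grep(!/^\s*$/, @status);
@status = grep(!/^X/, @status);   # drop externals, as with $_no_ignore_ext
print scalar(@status), ": $status[0]";   # 1: M       lib/foo.pm
```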
sub get_tree_from_treeish {
	my ($treeish) = @_;
	croak "Not a sha1: $treeish\n" unless $treeish =~ /^$sha1$/o;
	chomp(my $type = `git-cat-file -t $treeish`);
	my $expected;
	while ($type eq 'tag') {
		chomp(($treeish, $type) = `git-cat-file tag $treeish`);
	}
	if ($type eq 'commit') {
		$expected = (grep /^tree /,`git-cat-file commit $treeish`)[0];
		($expected) = ($expected =~ /^tree ($sha1)$/);
		die "Unable to get tree from $treeish\n" unless $expected;
	} elsif ($type eq 'tree') {
		$expected = $treeish;
	} else {
		die "$treeish is a $type, expected tree, tag or commit\n";
	}
	return $expected;
}

sub assert_tree {
	return if $_use_lib;
	my ($treeish) = @_;
	my $expected = get_tree_from_treeish($treeish);

	my $tmpindex = $GIT_SVN_INDEX.'.assert-tmp';
	if (-e $tmpindex) {
		unlink $tmpindex or croak $!;
	}
	my $old_index = set_index($tmpindex);
	index_changes(1);
	chomp(my $tree = `git-write-tree`);
	restore_index($old_index);
	if ($tree ne $expected) {
		croak "Tree mismatch, Got: $tree, Expected: $expected\n";
	}
	unlink $tmpindex;
}

sub parse_diff_tree {
	my $diff_fh = shift;
	local $/ = "\0";
	my $state = 'meta';
	my @mods;
	while (<$diff_fh>) {
		chomp $_; # this gets rid of the trailing "\0"
		if ($state eq 'meta' && /^:(\d{6})\s(\d{6})\s
					$sha1\s($sha1)\s([MTCRAD])\d*$/xo) {
			push @mods, {	mode_a => $1, mode_b => $2,
					sha1_b => $3, chg => $4 };
			if ($4 =~ /^(?:C|R)$/) {
				$state = 'file_a';
			} else {
				$state = 'file_b';
			}
		} elsif ($state eq 'file_a') {
			my $x = $mods[$#mods] or croak "Empty array\n";
			if ($x->{chg} !~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			$x->{file_a} = $_;
			$state = 'file_b';
		} elsif ($state eq 'file_b') {
			my $x = $mods[$#mods] or croak "Empty array\n";
			if (exists $x->{file_a} && $x->{chg} !~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			if (!exists $x->{file_a} && $x->{chg} =~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			$x->{file_b} = $_;
			$state = 'meta';
		} else {
			croak "Error parsing $_\n";
		}
	}
	close $diff_fh or croak $?;

	return \@mods;
}

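parse_diff_tree above consumes NUL-delimited `git diff-tree -z` records: a meta line of the form `:mode_a mode_b sha1_a sha1_b status`, followed by one pathname (or two, for copies and renames). A toy run of the same state machine over synthetic records — the sha1s and filenames below are made up:

```perl
use strict;
use warnings;

my $sha1 = qr/[a-f\d]{40}/;
# Two synthetic diff-tree -z records: a modify (M) and a rename (R100).
my $raw = ":100644 100755 " . ('a' x 40) . " " . ('b' x 40) . " M\0foo.pl\0"
	. ":100644 100644 " . ('c' x 40) . " " . ('d' x 40) . " R100\0old.pl\0new.pl\0";

open my $fh, '<', \$raw or die $!;
local $/ = "\0";                # records are NUL-terminated
my ($state, @mods) = ('meta');
while (<$fh>) {
	chomp;                  # strip the trailing "\0"
	if ($state eq 'meta' && /^:(\d{6})\s(\d{6})\s$sha1\s($sha1)\s([MTCRAD])\d*$/o) {
		push @mods, { mode_a => $1, mode_b => $2, sha1_b => $3, chg => $4 };
		$state = $4 =~ /^(?:C|R)$/ ? 'file_a' : 'file_b';
	} elsif ($state eq 'file_a') {   # first path of a copy/rename
		$mods[-1]->{file_a} = $_;
		$state = 'file_b';
	} elsif ($state eq 'file_b') {   # the (destination) pathname
		$mods[-1]->{file_b} = $_;
		$state = 'meta';
	}
}
close $fh;
print scalar(@mods), " records: $mods[0]{chg} $mods[0]{file_b}, ",
	"$mods[1]{chg} $mods[1]{file_a} => $mods[1]{file_b}\n";
```

The `\d*` after the status letter swallows rename/copy similarity scores like the `100` in `R100`.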
sub svn_check_prop_executable {
	my $m = shift;
	return if -l $m->{file_b};
	if ($m->{mode_b} =~ /755$/) {
		chmod((0755 &~ umask),$m->{file_b}) or croak $!;
		if ($m->{mode_a} !~ /755$/) {
			sys(qw(svn propset svn:executable 1), $m->{file_b});
		}
		-x $m->{file_b} or croak "$m->{file_b} is not executable!\n";
	} elsif ($m->{mode_b} !~ /755$/ && $m->{mode_a} =~ /755$/) {
		sys(qw(svn propdel svn:executable), $m->{file_b});
		chmod((0644 &~ umask),$m->{file_b}) or croak $!;
		-x $m->{file_b} and croak "$m->{file_b} is executable!\n";
	}
}

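svn_check_prop_executable keys the svn:executable transition off the last three octal digits of the old and new git file modes. The decision table in isolation — `exec_transition` is a hypothetical helper name, not part of git-svn:

```perl
use strict;
use warnings;

# Decide the svn:executable property change from two git file modes,
# mirroring svn_check_prop_executable's mode tests.
sub exec_transition {
	my ($mode_a, $mode_b) = @_;
	return 'propset' if $mode_b =~ /755$/ && $mode_a !~ /755$/;
	return 'propdel' if $mode_b !~ /755$/ && $mode_a =~ /755$/;
	return 'none';
}

print exec_transition('100644', '100755'), "\n";   # propset
print exec_transition('100755', '100644'), "\n";   # propdel
print exec_transition('100644', '100644'), "\n";   # none
```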
sub svn_ensure_parent_path {
	my $dir_b = dirname(shift);
	svn_ensure_parent_path($dir_b) if ($dir_b ne File::Spec->curdir);
	mkpath([$dir_b]) unless (-d $dir_b);
	sys(qw(svn add -N), $dir_b) unless (-d "$dir_b/.svn");
}

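svn_ensure_parent_path recurses with dirname() before acting, so missing ancestor directories are created (and `svn add -N`'d) parent-first. The same bottom-up recursion with the filesystem and svn calls replaced by a log, to show the ordering:

```perl
use strict;
use warnings;
use File::Basename qw/dirname/;
use File::Spec;

my @created;
# Record each ancestor of $path, parents before children —
# push() stands in for mkpath + `svn add -N`.
sub ensure_parents {
	my $dir = dirname(shift);
	return if $dir eq File::Spec->curdir;
	ensure_parents($dir);   # recurse first: top-most parent wins
	push @created, $dir;
}

ensure_parents('a/b/c/file.txt');
print join(' ', @created), "\n";   # a a/b a/b/c
```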
sub precommit_check {
|
|
|
|
my $mods = shift;
|
|
|
|
my (%rm_file, %rmdir_check, %added_check);
|
|
|
|
|
|
|
|
my %o = ( D => 0, R => 1, C => 2, A => 3, M => 3, T => 3 );
|
|
|
|
foreach my $m (sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @$mods) {
|
|
|
|
if ($m->{chg} eq 'R') {
|
|
|
|
if (-d $m->{file_b}) {
|
|
|
|
err_dir_to_file("$m->{file_a} => $m->{file_b}");
|
|
|
|
}
|
|
|
|
# dir/$file => dir/file/$file
|
|
|
|
my $dirname = dirname($m->{file_b});
|
|
|
|
while ($dirname ne File::Spec->curdir) {
|
|
|
|
if ($dirname ne $m->{file_a}) {
|
|
|
|
$dirname = dirname($dirname);
|
|
|
|
next;
|
|
|
|
}
|
|
|
|
err_file_to_dir("$m->{file_a} => $m->{file_b}");
|
|
|
|
}
|
|
|
|
# baz/zzz => baz (baz is a file)
|
|
|
|
$dirname = dirname($m->{file_a});
|
|
|
|
while ($dirname ne File::Spec->curdir) {
|
|
|
|
if ($dirname ne $m->{file_b}) {
|
|
|
|
$dirname = dirname($dirname);
|
|
|
|
next;
|
|
|
|
}
|
|
|
|
err_dir_to_file("$m->{file_a} => $m->{file_b}");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if ($m->{chg} =~ /^(D|R)$/) {
|
|
|
|
my $t = $1 eq 'D' ? 'file_b' : 'file_a';
|
|
|
|
$rm_file{ $m->{$t} } = 1;
|
|
|
|
my $dirname = dirname( $m->{$t} );
|
|
|
|
my $basename = basename( $m->{$t} );
|
|
|
|
$rmdir_check{$dirname}->{$basename} = 1;
|
|
|
|
} elsif ($m->{chg} =~ /^(?:A|C)$/) {
|
|
|
|
if (-d $m->{file_b}) {
|
|
|
|
err_dir_to_file($m->{file_b});
|
|
|
|
}
|
|
|
|
my $dirname = dirname( $m->{file_b} );
|
|
|
|
my $basename = basename( $m->{file_b} );
|
|
|
|
$added_check{$dirname}->{$basename} = 1;
|
|
|
|
while ($dirname ne File::Spec->curdir) {
|
|
|
|
if ($rm_file{$dirname}) {
|
|
|
|
err_file_to_dir($m->{file_b});
|
|
|
|
}
|
|
|
|
$dirname = dirname $dirname;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return (\%rmdir_check, \%added_check);
|
|
|
|
|
|
|
|
sub err_dir_to_file {
|
|
|
|
my $file = shift;
|
|
|
|
print STDERR "Node change from directory to file ",
|
|
|
|
"is not supported by Subversion: ",$file,"\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
sub err_file_to_dir {
|
|
|
|
my $file = shift;
|
|
|
|
print STDERR "Node change from file to directory ",
|
|
|
|
"is not supported by Subversion: ",$file,"\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
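The %o hash in precommit_check ranks deletes lowest so they are checked before renames, copies, and adds. A standalone sketch of that ordering (dummy entries carrying only the chg key, not git-svn's full mod structs):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Same ranking as precommit_check: D first, then R, then C, then A/M/T.
my %o = ( D => 0, R => 1, C => 2, A => 3, M => 3, T => 3 );
my @mods = map { { chg => $_ } } qw(A D C R M T);
my @sorted = map { $_->{chg} }
             sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @mods;
print "@sorted\n"; # D R C first; A/M/T tie at rank 3
```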
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
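The fallback described above is selected purely through the environment. A minimal sketch of that kind of guard — not git-svn's exact detection code, which also probes whether the SVN::* modules actually load:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Sketch only: setting GIT_SVN_NO_LIB=1 forces the command-line svn client.
$ENV{GIT_SVN_NO_LIB} = 1;
my $use_lib = $ENV{GIT_SVN_NO_LIB} ? 0
            : (eval { require SVN::Core; 1 } ? 1 : 0);
print $use_lib ? "using SVN::* libraries\n"
               : "falling back to command-line svn\n"; # falls back here
```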
sub get_diff {
	my ($from, $treeish) = @_;
	assert_tree($from);
	print "diff-tree $from $treeish\n";
	my $pid = open my $diff_fh, '-|';
	defined $pid or croak $!;
	if ($pid == 0) {
		my @diff_tree = qw(git-diff-tree -z -r);
		if ($_cp_similarity) {
			push @diff_tree, "-C$_cp_similarity";
		} else {
			push @diff_tree, '-C';
		}
		push @diff_tree, '--find-copies-harder' if $_find_copies_harder;
		push @diff_tree, "-l$_l" if defined $_l;
		exec(@diff_tree, $from, $treeish) or croak $!;
	}
	return parse_diff_tree($diff_fh);
}
sub svn_checkout_tree {
	my ($from, $treeish) = @_;
	my $mods = get_diff($from->{commit}, $treeish);
	return $mods unless (scalar @$mods);
	my ($rm, $add) = precommit_check($mods);

	my %o = ( D => 1, R => 0, C => -1, A => 3, M => 3, T => 3 );
	foreach my $m (sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @$mods) {
		if ($m->{chg} eq 'C') {
			svn_ensure_parent_path( $m->{file_b} );
			sys(qw(svn cp), $m->{file_a}, $m->{file_b});
			apply_mod_line_blob($m);
			svn_check_prop_executable($m);
		} elsif ($m->{chg} eq 'D') {
			sys(qw(svn rm --force), $m->{file_b});
		} elsif ($m->{chg} eq 'R') {
			svn_ensure_parent_path( $m->{file_b} );
			sys(qw(svn mv --force), $m->{file_a}, $m->{file_b});
			apply_mod_line_blob($m);
			svn_check_prop_executable($m);
		} elsif ($m->{chg} eq 'M') {
			apply_mod_line_blob($m);
			svn_check_prop_executable($m);
		} elsif ($m->{chg} eq 'T') {
			svn_check_prop_executable($m);
			apply_mod_line_blob($m);
			if ($m->{mode_a} =~ /^120/ && $m->{mode_b} !~ /^120/) {
				sys(qw(svn propdel svn:special), $m->{file_b});
			} else {
				sys(qw(svn propset svn:special *),$m->{file_b});
			}
		} elsif ($m->{chg} eq 'A') {
			svn_ensure_parent_path( $m->{file_b} );
			apply_mod_line_blob($m);
			sys(qw(svn add), $m->{file_b});
			svn_check_prop_executable($m);
		} else {
			croak "Invalid chg: $m->{chg}\n";
		}
	}

	assert_tree($treeish);
	if ($_rmdir) { # remove empty directories
		handle_rmdir($rm, $add);
	}
	assert_tree($treeish);
	return $mods;
}
sub libsvn_checkout_tree {
	my ($from, $treeish, $ed) = @_;
	my $mods = get_diff($from, $treeish);
	return $mods unless (scalar @$mods);
	my %o = ( D => 1, R => 0, C => -1, A => 3, M => 3, T => 3 );
	foreach my $m (sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @$mods) {
		my $f = $m->{chg};
		if (defined $o{$f}) {
			$ed->$f($m, $_q);
		} else {
			croak "Invalid change type: $f\n";
		}
	}
	$ed->rmdirs($_q) if $_rmdir;
	return $mods;
}
# svn ls doesn't work with respect to the current working tree, but what's
# in the repository.  There's not even an option for it... *sigh*
# (added files don't show up and removed files remain in the ls listing)
sub svn_ls_current {
	my ($dir, $rm, $add) = @_;
	chomp(my @ls = safe_qx('svn','ls',$dir));
	my @ret = ();
	foreach (@ls) {
		s#/$##; # trailing slashes are evil
		push @ret, $_ unless $rm->{$dir}->{$_};
	}
	if (exists $add->{$dir}) {
		push @ret, keys %{$add->{$dir}};
	}
	return \@ret;
}

sub handle_rmdir {
	my ($rm, $add) = @_;

	foreach my $dir (sort {length $b <=> length $a} keys %$rm) {
		my $ls = svn_ls_current($dir, $rm, $add);
		next if (scalar @$ls);
		sys(qw(svn rm --force),$dir);

		my $dn = dirname $dir;
		$rm->{ $dn }->{ basename $dir } = 1;
		$ls = svn_ls_current($dn, $rm, $add);
		while (scalar @$ls == 0 && $dn ne File::Spec->curdir) {
			sys(qw(svn rm --force),$dn);
			$dir = basename $dn;
			$dn = dirname $dn;
			$rm->{ $dn }->{ $dir } = 1;
			$ls = svn_ls_current($dn, $rm, $add);
		}
	}
}
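handle_rmdir visits candidate directories deepest-first: sorting the keys by descending path length guarantees a child is checked before its parent, so a parent emptied by a child's removal can be pruned on the way back up. A small demonstration of just that ordering:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Longer paths sort first, so a/b/c is considered before a/b, and a/b
# before a -- the same order handle_rmdir relies on.
my %rm = map { $_ => 1 } qw(a a/b a/b/c);
my @order = sort { length $b <=> length $a } keys %rm;
print "@order\n"; # a/b/c a/b a
```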
sub get_commit_message {
	my ($commit, $commit_msg) = (@_);
	my %log_msg = ( msg => '' );
	open my $msg, '>', $commit_msg or croak $!;

	chomp(my $type = `git-cat-file -t $commit`);
	if ($type eq 'commit' || $type eq 'tag') {
		my $pid = open my $msg_fh, '-|';
		defined $pid or croak $!;

		if ($pid == 0) {
			exec('git-cat-file', $type, $commit) or croak $!;
		}
		my $in_msg = 0;
		while (<$msg_fh>) {
			if (!$in_msg) {
				$in_msg = 1 if (/^\s*$/);
			} elsif (/^git-svn-id: /) {
				# skip this, we regenerate the correct one
				# on re-fetch anyways
			} else {
				print $msg $_ or croak $!;
			}
		}
		close $msg_fh or croak $?;
	}
	close $msg or croak $!;

	if ($_edit || ($type eq 'tree')) {
		my $editor = $ENV{VISUAL} || $ENV{EDITOR} || 'vi';
		system($editor, $commit_msg);
	}

	# file_to_s removes all trailing newlines, so just use chomp() here:
	open $msg, '<', $commit_msg or croak $!;
	{ local $/; chomp($log_msg{msg} = <$msg>); }
	close $msg or croak $!;

	return \%log_msg;
}
sub set_svn_commit_env {
	if (defined $LC_ALL) {
		$ENV{LC_ALL} = $LC_ALL;
	} else {
		delete $ENV{LC_ALL};
	}
}
sub svn_commit_tree {
	my ($last, $commit) = @_;
	my $commit_msg = "$GIT_SVN_DIR/.svn-commit.tmp.$$";
	my $log_msg = get_commit_message($commit, $commit_msg);
	my ($oneline) = ($log_msg->{msg} =~ /([^\n\r]+)/);
	print "Committing $commit: $oneline\n";

	set_svn_commit_env();
	my @ci_output = safe_qx(qw(svn commit -F),$commit_msg);
	$ENV{LC_ALL} = 'C';
	unlink $commit_msg;
	my ($committed) = ($ci_output[$#ci_output] =~ /(\d+)/);
	if (!defined $committed) {
		my $out = join("\n",@ci_output);
		print STDERR "W: Trouble parsing \`svn commit' output:\n\n",
				$out, "\n\nAssuming English locale...";
		($committed) = ($out =~ /^Committed revision (\d+)\./sm);
		defined $committed or die " FAILED!\n",
			"Commit output failed to parse committed revision!\n";
		print STDERR " OK\n";
	}

	my @svn_up = qw(svn up);
	push @svn_up, '--ignore-externals' unless $_no_ignore_ext;
	if ($_optimize_commits && ($committed == ($last->{revision} + 1))) {
		push @svn_up, "-r$committed";
		sys(@svn_up);
		my $info = svn_info('.');
		my $date = $info->{'Last Changed Date'} or die "Missing date\n";
		if ($info->{'Last Changed Rev'} != $committed) {
			croak "$info->{'Last Changed Rev'} != $committed\n";
		}
		my ($Y,$m,$d,$H,$M,$S,$tz) = ($date =~
			/(\d{4})\-(\d\d)\-(\d\d)\s
			 (\d\d)\:(\d\d)\:(\d\d)\s([\-\+]\d+)/x)
			or croak "Failed to parse date: $date\n";
		$log_msg->{date} = "$tz $Y-$m-$d $H:$M:$S";
		$log_msg->{author} = $info->{'Last Changed Author'};
		$log_msg->{revision} = $committed;
		$log_msg->{msg} .= "\n";
		$log_msg->{parents} = [ $last->{commit} ];
		$log_msg->{commit} = git_commit($log_msg, $commit);
		return $log_msg;
	}
	# resync immediately
	push @svn_up, "-r$last->{revision}";
	sys(@svn_up);
	return fetch("$committed=$commit");
}

sub rev_list_raw {
	my (@args) = @_;
	my $pid = open my $fh, '-|';
	defined $pid or croak $!;
	if (!$pid) {
		exec(qw/git-rev-list --pretty=raw/, @args) or croak $!;
	}
	return { fh => $fh, t => { } };
}

sub next_rev_list_entry {
	my $rl = shift;
	my $fh = $rl->{fh};
	my $x = $rl->{t};
	while (<$fh>) {
		if (/^commit ($sha1)$/o) {
			if ($x->{c}) {
				$rl->{t} = { c => $1 };
				return $x;
			} else {
				$x->{c} = $1;
			}
		} elsif (/^parent ($sha1)$/o) {
			$x->{p}->{$1} = 1;
		} elsif (s/^    //) {
			$x->{m} ||= '';
			$x->{m} .= $_;
		}
	}
	return ($x != $rl->{t}) ? $x : undef;
}

# read the entire log into a temporary file (which is removed ASAP)
# and store the file handle + parser state
sub svn_log_raw {
	my (@log_args) = @_;
	my $log_fh = IO::File->new_tmpfile or croak $!;
	my $pid = fork;
	defined $pid or croak $!;
	if (!$pid) {
		open STDOUT, '>&', $log_fh or croak $!;
		exec(qw(svn log), @log_args) or croak $!;
	}
	waitpid $pid, 0;
	croak $? if $?;
	seek $log_fh, 0, 0 or croak $!;
	return { state => 'sep', fh => $log_fh };
}

sub next_log_entry {
	my $log = shift; # retval of svn_log_raw()
	my $ret = undef;
	my $fh = $log->{fh};

	while (<$fh>) {
		chomp;
		if (/^\-{72}$/) {
			if ($log->{state} eq 'msg') {
				if ($ret->{lines}) {
					$ret->{msg} .= $_."\n";
					unless (--$ret->{lines}) {
						$log->{state} = 'sep';
					}
				} else {
					croak "Log parse error at: $_\n",
						$ret->{revision}, "\n";
				}
				next;
			}
			if ($log->{state} ne 'sep') {
				croak "Log parse error at: $_\n",
					"state: $log->{state}\n",
					$ret->{revision}, "\n";
			}
			$log->{state} = 'rev';

			# if we have an empty log message, put something there:
			if ($ret) {
				$ret->{msg} ||= "\n";
				delete $ret->{lines};
				return $ret;
			}
			next;
		}
		if ($log->{state} eq 'rev' && s/^r(\d+)\s*\|\s*//) {
			my $rev = $1;
			my ($author, $date, $lines) = split(/\s*\|\s*/, $_, 3);
			($lines) = ($lines =~ /(\d+)/);
			$date = '1970-01-01 00:00:00 +0000'
				if ($_ignore_nodate && $date eq '(no date)');
			my ($Y,$m,$d,$H,$M,$S,$tz) = ($date =~
					/(\d{4})\-(\d\d)\-(\d\d)\s
					 (\d\d)\:(\d\d)\:(\d\d)\s([\-\+]\d+)/x)
					 or croak "Failed to parse date: $date\n";
			$ret = { revision => $rev,
				 date => "$tz $Y-$m-$d $H:$M:$S",
				 author => $author,
				 lines => $lines,
				 msg => '' };
			if (defined $_authors && ! defined $users{$author}) {
				die "Author: $author not defined in ",
					"$_authors file\n";
			}
			$log->{state} = 'msg_start';
			next;
		}
		# skip the first blank line of the message:
		if ($log->{state} eq 'msg_start' && /^$/) {
			$log->{state} = 'msg';
		} elsif ($log->{state} eq 'msg') {
			if ($ret->{lines}) {
				$ret->{msg} .= $_."\n";
				unless (--$ret->{lines}) {
					$log->{state} = 'sep';
				}
			} else {
				croak "Log parse error at: $_\n",
					$ret->{revision}, "\n";
			}
		}
	}
	return $ret;
}

sub svn_info {
	my $url = shift || $SVN_URL;

	my $pid = open my $info_fh, '-|';
	defined $pid or croak $!;

	if ($pid == 0) {
		exec(qw(svn info), $url) or croak $!;
	}

	my $ret = {};
	# only single-line values seem to exist in svn info output
	while (<$info_fh>) {
		chomp $_;
		if (m#^([^:]+)\s*:\s*(\S.*)$#) {
			$ret->{$1} = $2;
			push @{$ret->{-order}}, $1;
		}
	}
	close $info_fh or croak $?;
	return $ret;
}

sub sys { system(@_) == 0 or croak $? }
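svn_info's key/value parser can be tried on canned output; this is a sketch only, and the URL, author, and revision below are fabricated, not taken from any real repository.

```perl
#!/usr/bin/env perl
# Sketch: svn_info's line parser applied to canned `svn info` output
# (all values below are made up).
use strict;
use warnings;

my $canned = <<'EOF';
URL: svn://example.com/repo/trunk
Last Changed Author: someone
Last Changed Rev: 42
EOF

my $ret = {};
foreach (split /\n/, $canned) {
	# only single-line values seem to exist in svn info output
	if (m#^([^:]+)\s*:\s*(\S.*)$#) {
		$ret->{$1} = $2;
		push @{$ret->{-order}}, $1;	# remember key order too
	}
}
print "$ret->{'Last Changed Rev'} by $ret->{'Last Changed Author'}\n";
```

Keys keep their full text up to the first colon, so multi-word keys like `Last Changed Rev` index the hash directly.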

sub do_update_index {
	my ($z_cmd, $cmd, $no_text_base) = @_;

	my $z = open my $p, '-|';
	defined $z or croak $!;
	unless ($z) { exec @$z_cmd or croak $! }

	my $pid = open my $ui, '|-';
	defined $pid or croak $!;
	unless ($pid) {
		exec('git-update-index', "--$cmd", '-z', '--stdin') or croak $!;
	}
	local $/ = "\0";
	while (my $x = <$p>) {
		chomp $x;
		if (!$no_text_base && lstat $x && ! -l _ &&
				svn_propget_base('svn:keywords', $x)) {
			my $mode = -x _ ? 0755 : 0644;
			my ($v,$d,$f) = File::Spec->splitpath($x);
			my $tb = File::Spec->catfile($d, '.svn', 'tmp',
						'text-base', "$f.svn-base");
			$tb =~ s#^/##;
			unless (-f $tb) {
				$tb = File::Spec->catfile($d, '.svn',
						'text-base', "$f.svn-base");
				$tb =~ s#^/##;
			}
			my @s = stat($x);
			unlink $x or croak $!;
			copy($tb, $x);
			chmod(($mode &~ umask), $x) or croak $!;
			utime $s[8], $s[9], $x;
		}
		print $ui $x, "\0";
	}
	close $ui or croak $?;
}

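do_update_index wires two pipes together: a reader (`-|`) from the file-listing command and a writer (`|-`) into git-update-index, with NUL-delimited records in between. A hedged sketch of just the reader half, for Unix-like systems; the child here merely stands in for git-diff-files or git-ls-files.

```perl
#!/usr/bin/env perl
# Sketch: the NUL-delimited reader pipe from do_update_index; the forked
# child prints a fake file list instead of exec'ing a git command.
use strict;
use warnings;

my $z = open my $p, '-|';
defined $z or die $!;
unless ($z) {
	print "a.txt\0dir/b.txt\0";	# stand-in for git-diff-files -z output
	exit 0;
}

local $/ = "\0";	# NUL separators let paths safely contain newlines
my @paths;
while (my $x = <$p>) {
	chomp $x;	# strips the trailing NUL, since $/ is "\0"
	push @paths, $x;
}
close $p;
print join(' ', @paths), "\n";
```

Using `-z`/NUL framing end to end is what makes the pipeline "funky-filename-safe", as the commit message above puts it.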
sub index_changes {
	return if $_use_lib;

	if (!-f "$GIT_SVN_DIR/info/exclude") {
		open my $fd, '>>', "$GIT_SVN_DIR/info/exclude" or croak $!;
		print $fd '.svn',"\n";
		close $fd or croak $!;
	}
	my $no_text_base = shift;
	do_update_index([qw/git-diff-files --name-only -z/],
			'remove',
			$no_text_base);
	do_update_index([qw/git-ls-files -z --others/,
			"--exclude-from=$GIT_SVN_DIR/info/exclude"],
			'add',
			$no_text_base);
}

sub s_to_file {
	my ($str, $file, $mode) = @_;
	open my $fd, '>', $file or croak $!;
	print $fd $str, "\n" or croak $!;
	close $fd or croak $!;
	chmod ($mode &~ umask, $file) if (defined $mode);
}

sub file_to_s {
	my $file = shift;
	open my $fd, '<', $file or croak "$!: file: $file\n";
	local $/;
	my $ret = <$fd>;
	close $fd or croak $!;
	$ret =~ s/\s*$//s;
	return $ret;
}

sub assert_revision_unknown {
	my $r = shift;
	if (my $c = revdb_get($REVDB, $r)) {
		croak "$r = $c already exists! Why are we refetching it?";
	}
}

sub trees_eq {
	my ($x, $y) = @_;
	my @x = safe_qx('git-cat-file', 'commit', $x);
	my @y = safe_qx('git-cat-file', 'commit', $y);
	if (($y[0] ne $x[0]) || $x[0] !~ /^tree $sha1\n$/
				|| $y[0] !~ /^tree $sha1\n$/) {
		print STDERR "Trees not equal: $y[0] != $x[0]\n";
		return 0;
	}
	return 1;
}

sub git_commit {
	my ($log_msg, @parents) = @_;
	assert_revision_unknown($log_msg->{revision});
	map_tree_joins() if (@_branch_from && !%tree_map);

	my (@tmp_parents, @exec_parents, %seen_parent);
	if (my $lparents = $log_msg->{parents}) {
		@tmp_parents = @$lparents;
	}
	# commit parents can be conditionally bound to a particular
	# svn revision via: "svn_revno=commit_sha1", filter them out here:
	foreach my $p (@parents) {
		next unless defined $p;
		if ($p =~ /^(\d+)=($sha1_short)$/o) {
			if ($1 == $log_msg->{revision}) {
				push @tmp_parents, $2;
			}
		} else {
			push @tmp_parents, $p if $p =~ /$sha1_short/o;
		}
	}
	my $tree = $log_msg->{tree};
	if (!defined $tree) {
		my $index = set_index($GIT_SVN_INDEX);
		index_changes();
		chomp($tree = `git-write-tree`);
		croak $? if $?;
		restore_index($index);
	}

	# just in case we clobber the existing ref, we still want that ref
	# as our parent:
	if (my $cur = eval { file_to_s("$GIT_DIR/refs/remotes/$GIT_SVN") }) {
		push @tmp_parents, $cur;
	}

	if (exists $tree_map{$tree}) {
		foreach my $p (@{$tree_map{$tree}}) {
			my $skip;
			foreach (@tmp_parents) {
				# see if a common parent is found
				my $mb = eval {
					safe_qx('git-merge-base', $_, $p)
				};
				next if ($@ || $?);
				$skip = 1;
				last;
			}
			next if $skip;
			my ($url_p, $r_p, $uuid_p) = cmt_metadata($p);
			next if (($SVN_UUID eq $uuid_p) &&
				($log_msg->{revision} > $r_p));
			next if (defined $url_p && defined $SVN_URL &&
				($SVN_UUID eq $uuid_p) &&
				($url_p eq $SVN_URL));
			push @tmp_parents, $p;
		}
	}

	foreach (@tmp_parents) {
		next if $seen_parent{$_};
		$seen_parent{$_} = 1;
		push @exec_parents, $_;
		# MAXPARENT is defined to 16 in commit-tree.c:
		last if @exec_parents >= 16;
	}
	set_commit_env($log_msg);
	my @exec = ('git-commit-tree', $tree);
	push @exec, '-p', $_ foreach @exec_parents;
	defined(my $pid = open3(my $msg_fh, my $out_fh, '>&STDERR', @exec))
		or croak $!;
	print $msg_fh $log_msg->{msg} or croak $!;
	unless ($_no_metadata) {
		print $msg_fh "\ngit-svn-id: $SVN_URL\@$log_msg->{revision}",
			" $SVN_UUID\n" or croak $!;
	}
	$msg_fh->flush == 0 or croak $!;
	close $msg_fh or croak $!;
	chomp(my $commit = do { local $/; <$out_fh> });
	close $out_fh or croak $!;
	waitpid $pid, 0;
	croak $? if $?;
	if ($commit !~ /^$sha1$/o) {
		die "Failed to commit, invalid sha1: $commit\n";
	}
	sys('git-update-ref',"refs/remotes/$GIT_SVN",$commit);
	revdb_set($REVDB, $log_msg->{revision}, $commit);
	# this output is read via pipe, do not change:
	print "r$log_msg->{revision} = $commit\n";
	check_repack();
	return $commit;
}

sub check_repack {
	if ($_repack && (--$_repack_nr == 0)) {
		$_repack_nr = $_repack;
		sys("git repack $_repack_flags");
	}
}

sub set_commit_env {
	my ($log_msg) = @_;
	my $author = $log_msg->{author};
	if (!defined $author || length $author == 0) {
		$author = '(no author)';
	}
	my ($name,$email) = defined $users{$author} ? @{$users{$author}}
				: ($author,"$author\@$SVN_UUID");
	$ENV{GIT_AUTHOR_NAME} = $ENV{GIT_COMMITTER_NAME} = $name;
	$ENV{GIT_AUTHOR_EMAIL} = $ENV{GIT_COMMITTER_EMAIL} = $email;
	$ENV{GIT_AUTHOR_DATE} = $ENV{GIT_COMMITTER_DATE} = $log_msg->{date};
}

sub apply_mod_line_blob {
	my $m = shift;
	if ($m->{mode_b} =~ /^120/) {
		blob_to_symlink($m->{sha1_b}, $m->{file_b});
	} else {
		blob_to_file($m->{sha1_b}, $m->{file_b});
	}
}

sub blob_to_symlink {
	my ($blob, $link) = @_;
	defined $link or croak "\$link not defined!\n";
	croak "Not a sha1: $blob\n" unless $blob =~ /^$sha1$/o;
	if (-l $link || -f _) {
		unlink $link or croak $!;
	}

	my $dest = `git-cat-file blob $blob`; # no newline, so no chomp
	symlink $dest, $link or croak $!;
}
sub blob_to_file {
	my ($blob, $file) = @_;
	defined $file or croak "\$file not defined!\n";
	croak "Not a sha1: $blob\n" unless $blob =~ /^$sha1$/o;
	if (-l $file || -f _) {
		unlink $file or croak $!;
	}

	open my $blob_fh, '>', $file or croak "$!: $file\n";
	my $pid = fork;
	defined $pid or croak $!;

	if ($pid == 0) {
		open STDOUT, '>&', $blob_fh or croak $!;
		exec('git-cat-file','blob',$blob) or croak $!;
	}
	waitpid $pid, 0;
	croak $? if $?;

	close $blob_fh or croak $!;
}
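apply_mod_line_blob() above dispatches on the git file mode: mode 120000 marks a symlink blob, anything else here is written out as a regular file. A minimal standalone sketch of that check (the sample modes are just for the demo):

```perl
#!/usr/bin/env perl
# Sketch of the mode_b test in apply_mod_line_blob(): git records symlinks
# with mode 120000, so a /^120/ match selects the symlink code path.
use strict;
use warnings;

sub blob_kind {
	my ($mode) = @_;
	return $mode =~ /^120/ ? 'symlink' : 'file';
}

print blob_kind('120000'), "\n"; # symlink
print blob_kind('100644'), "\n"; # file
```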
sub safe_qx {
	my $pid = open my $child, '-|';
	defined $pid or croak $!;
	if ($pid == 0) {
		exec(@_) or croak $!;
	}
	my @ret = (<$child>);
	close $child or croak $?;
	die $? if $?; # just in case close didn't error out
	return wantarray ? @ret : join('',@ret);
}

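safe_qx() above captures a command's output without involving a shell: open(..., '-|') forks, the child execs the argument list directly, and the parent slurps its stdout. A self-contained sketch of the same pattern, run against echo purely for illustration:

```perl
#!/usr/bin/env perl
# Fork with open '-|', exec the command list in the child (no shell
# interpolation), and read its stdout in the parent -- the safe_qx() idiom.
use strict;
use warnings;
use Carp qw/croak/;

sub capture {
	my $pid = open my $child, '-|';
	defined $pid or croak $!;
	if ($pid == 0) {
		exec(@_) or croak $!;
	}
	my @ret = (<$child>);
	close $child or croak $?;
	return wantarray ? @ret : join('', @ret);
}

print capture(qw/echo hello/);
```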
sub svn_compat_check {
	if ($_follow_parent) {
		print STDERR 'E: --follow-parent functionality is only ',
			"available when SVN libraries are used\n";
		exit 1;
	}
	my @co_help = safe_qx(qw(svn co -h));
	unless (grep /ignore-externals/,@co_help) {
		print STDERR "W: Installed svn version does not support ",
			"--ignore-externals\n";
		$_no_ignore_ext = 1;
	}
	if (grep /usage: checkout URL\[\@REV\]/,@co_help) {
		$_svn_co_url_revs = 1;
	}
	if (grep /\[TARGET\[\@REV\]\.\.\.\]/, `svn propget -h`) {
		$_svn_pg_peg_revs = 1;
	}

	# I really, really hope nobody hits this...
	unless (grep /stop-on-copy/, (safe_qx(qw(svn log -h)))) {
		print STDERR <<'';
W: The installed svn version does not support the --stop-on-copy flag in
the log command.
Let's hope the directory you're tracking is not a branch or tag
and was never moved within the repository...

		$_no_stop_copy = 1;
	}
}
# *sigh*, new versions of svn won't honor -r<rev> without URL@<rev>,
# (and they won't honor URL@<rev> without -r<rev>, too!)
sub svn_cmd_checkout {
	my ($url, $rev, $dir) = @_;
	my @cmd = ('svn','co', "-r$rev");
	push @cmd, '--ignore-externals' unless $_no_ignore_ext;
	$url .= "\@$rev" if $_svn_co_url_revs;
	sys(@cmd, $url, $dir);
}
sub check_upgrade_needed {
	if (!-r $REVDB) {
		-d $GIT_SVN_DIR or mkpath([$GIT_SVN_DIR]);
		open my $fh, '>>',$REVDB or croak $!;
		close $fh;
	}
	my $old = eval {
		my $pid = open my $child, '-|';
		defined $pid or croak $!;
		if ($pid == 0) {
			close STDERR;
			exec('git-rev-parse',"$GIT_SVN-HEAD") or croak $!;
		}
		my @ret = (<$child>);
		close $child or croak $?;
		die $? if $?; # just in case close didn't error out
		return wantarray ? @ret : join('',@ret);
	};
	return unless $old;
	my $head = eval { safe_qx('git-rev-parse',"refs/remotes/$GIT_SVN") };
	if ($@ || !$head) {
		print STDERR "Please run: $0 rebuild --upgrade\n";
		exit 1;
	}
}
# fills %tree_map with a reverse mapping of trees to commits.  Useful
# for finding parents to commit on.
sub map_tree_joins {
	my %seen;
	foreach my $br (@_branch_from) {
		my $pid = open my $pipe, '-|';
		defined $pid or croak $!;
		if ($pid == 0) {
			exec(qw(git-rev-list --topo-order --pretty=raw), $br)
				or croak $!;
		}
		while (<$pipe>) {
			if (/^commit ($sha1)$/o) {
				my $commit = $1;

				# if we've seen a commit,
				# we've seen its parents
				last if $seen{$commit};
				my ($tree) = (<$pipe> =~ /^tree ($sha1)$/o);
				unless (defined $tree) {
					die "Failed to parse commit $commit\n";
				}
				push @{$tree_map{$tree}}, $commit;
				$seen{$commit} = 1;
			}
		}
		close $pipe; # we could be breaking the pipe early
	}
}
sub load_all_refs {
	if (@_branch_from) {
		print STDERR '--branch|-b parameters are ignored when ',
			"--branch-all-refs|-B is passed\n";
	}

	# don't worry about rev-list on non-commit objects/tags,
	# it shouldn't blow up if a ref is a blob or tree...
	chomp(@_branch_from = `git-rev-parse --symbolic --all`);
}
# '<svn username> = real-name <email address>' mapping based on git-svnimport:
sub load_authors {
	open my $authors, '<', $_authors or die "Can't open $_authors $!\n";
	while (<$authors>) {
		chomp;
		next unless /^(\S+?|\(no author\))\s*=\s*(.+?)\s*<(.+)>\s*$/;
		my ($user, $name, $email) = ($1, $2, $3);
		$users{$user} = [$name, $email];
	}
	close $authors or croak $!;
}
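load_authors() above parses one '<svn user> = Name <email>' mapping per line with a single regex. A standalone sketch of that parse against made-up sample lines (the names and addresses are invented for the demo):

```perl
#!/usr/bin/env perl
# Apply the load_authors() regex to sample authors-file lines; the
# '(no author)' alternative matches SVN commits without an author.
use strict;
use warnings;

my %users;
foreach my $line ('jrandom = J. Random Hacker <jrandom@example.com>',
		  '(no author) = nobody <none@example.com>') {
	next unless $line =~ /^(\S+?|\(no author\))\s*=\s*(.+?)\s*<(.+)>\s*$/;
	$users{$1} = [$2, $3];
}
print "$users{jrandom}[0] <$users{jrandom}[1]>\n"; # J. Random Hacker <jrandom@example.com>
```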
sub rload_authors {
	open my $authors, '<', $_authors or die "Can't open $_authors $!\n";
	while (<$authors>) {
		chomp;
		next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
		my ($user, $name, $email) = ($1, $2, $3);
		$rusers{"$name <$email>"} = $user;
	}
	close $authors or croak $!;
}
sub svn_propget_base {
	my ($p, $f) = @_;
	$f .= '@BASE' if $_svn_pg_peg_revs;
	return safe_qx(qw/svn propget/, $p, $f);
}
sub git_svn_each {
	my $sub = shift;
	foreach (`git-rev-parse --symbolic --all`) {
		next unless s#^refs/remotes/##;
		chomp $_;
		next unless -f "$GIT_DIR/svn/$_/info/url";
		&$sub($_);
	}
}
sub migrate_revdb {
	git_svn_each(sub {
		my $id = shift;
		defined(my $pid = fork) or croak $!;
		if (!$pid) {
			$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
			init_vars();
			exit 0 if -r $REVDB;
			print "Upgrading svn => git mapping...\n";
			-d $GIT_SVN_DIR or mkpath([$GIT_SVN_DIR]);
			open my $fh, '>>',$REVDB or croak $!;
			close $fh;
			rebuild();
			print "Done upgrading. You may now delete the ",
				"deprecated $GIT_SVN_DIR/revs directory\n";
			exit 0;
		}
		waitpid $pid, 0;
		croak $? if $?;
	});
}
sub migration_check {
	migrate_revdb() unless (-e $REVDB);
	return if (-d "$GIT_DIR/svn" || !-d $GIT_DIR);
	print "Upgrading repository...\n";
	unless (-d "$GIT_DIR/svn") {
		mkdir "$GIT_DIR/svn" or croak $!;
	}
	print "Data from a previous version of git-svn exists, but\n\t",
		"$GIT_SVN_DIR\n\t(required for this version ",
		"($VERSION) of git-svn) does not.\n";

	foreach my $x (`git-rev-parse --symbolic --all`) {
		next unless $x =~ s#^refs/remotes/##;
		chomp $x;
		next unless -f "$GIT_DIR/$x/info/url";
		my $u = eval { file_to_s("$GIT_DIR/$x/info/url") };
		next unless $u;
		my $dn = dirname("$GIT_DIR/svn/$x");
		mkpath([$dn]) unless -d $dn;
		rename "$GIT_DIR/$x", "$GIT_DIR/svn/$x" or croak "$!: $x";
	}
	migrate_revdb() if (-d $GIT_SVN_DIR && !-w $REVDB);
	print "Done upgrading.\n";
}
sub find_rev_before {
	my ($r, $id, $eq_ok) = @_;
	my $f = "$GIT_DIR/svn/$id/.rev_db";
	return (undef,undef) unless -r $f;
	--$r unless $eq_ok;
	while ($r > 0) {
		if (my $c = revdb_get($f, $r)) {
			return ($r, $c);
		}
		--$r;
	}
	return (undef, undef);
}
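find_rev_before() above walks revision numbers downward until the .rev_db lookup finds a mapped commit. A sketch of that scan with a plain hash standing in for revdb_get() (the revision numbers and sha1s are made up):

```perl
#!/usr/bin/env perl
# Decrement the revision until a mapped commit is found, mirroring the
# find_rev_before() loop; %rev_db fakes the on-disk .rev_db lookup.
use strict;
use warnings;

my %rev_db = (2 => 'c' x 40, 5 => 'f' x 40);

sub find_before {
	my ($r, $eq_ok) = @_;
	--$r unless $eq_ok;
	while ($r > 0) {
		return ($r, $rev_db{$r}) if exists $rev_db{$r};
		--$r;
	}
	return (undef, undef);
}

my ($r, $c) = find_before(5, 0); # nearest mapped revision strictly before r5
print "r$r\n"; # r2
```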
sub init_vars {
	$GIT_SVN ||= $ENV{GIT_SVN_ID} || 'git-svn';
	$GIT_SVN_DIR = "$GIT_DIR/svn/$GIT_SVN";
	$REVDB = "$GIT_SVN_DIR/.rev_db";
	$GIT_SVN_INDEX = "$GIT_SVN_DIR/index";
	$SVN_URL = undef;
	$SVN_WC = "$GIT_SVN_DIR/tree";
	%tree_map = ();
}
# convert GetOpt::Long specs for use by git-repo-config
sub read_repo_config {
	return unless -d $GIT_DIR;
	my $opts = shift;
	foreach my $o (keys %$opts) {
		my $v = $opts->{$o};
		my ($key) = ($o =~ /^([a-z\-]+)/);
		$key =~ s/-//g;
		my $arg = 'git-repo-config';
		$arg .= ' --int' if ($o =~ /[:=]i$/);
		$arg .= ' --bool' if ($o !~ /[:=][sfi]$/);
		if (ref $v eq 'ARRAY') {
			chomp(my @tmp = `$arg --get-all svn.$key`);
			@$v = @tmp if @tmp;
		} else {
			chomp(my $tmp = `$arg --get svn.$key`);
			if ($tmp && !($arg =~ / --bool/ && $tmp eq 'false')) {
				$$v = $tmp;
			}
		}
	}
}
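read_repo_config() above derives git-repo-config key names and type flags from the GetOpt::Long specs: strip the type suffix, drop the dashes, mark integer options with --int and bare flags with --bool. A standalone sketch of that conversion over a few illustrative specs:

```perl
#!/usr/bin/env perl
# Convert GetOpt::Long-style option specs into 'svn.<key>' config names
# plus the git-repo-config type flag, as read_repo_config() does.
use strict;
use warnings;

sub spec_to_config {
	my ($o) = @_;
	my ($key) = ($o =~ /^([a-z\-]+)/);
	$key =~ s/-//g;
	my $type = '';
	$type = ' --int'  if $o =~ /[:=]i$/;
	$type = ' --bool' if $o !~ /[:=][sfi]$/;
	return "svn.$key$type";
}

print spec_to_config($_), "\n" for 'repack:i', 'no-metadata', 'authors-file=s';
```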
sub set_default_vals {
	if (defined $_repack) {
		$_repack = 1000 if ($_repack <= 0);
		$_repack_nr = $_repack;
		$_repack_flags ||= '-d';
	}
}
sub read_grafts {
	my $gr_file = shift;
	my ($grafts, $comments) = ({}, {});
	if (open my $fh, '<', $gr_file) {
		my @tmp;
		while (<$fh>) {
			if (/^($sha1)\s+/) {
				my $c = $1;
				if (@tmp) {
					@{$comments->{$c}} = @tmp;
					@tmp = ();
				}
				foreach my $p (split /\s+/, $_) {
					$grafts->{$c}->{$p} = 1;
				}
			} else {
				push @tmp, $_;
			}
		}
		close $fh or croak $!;
		@{$comments->{'END'}} = @tmp if @tmp;
	}
	return ($grafts, $comments);
}
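read_grafts() above treats any line beginning with a sha1 as "commit parent parent ..." and buffers everything else as comments. A sketch of that line classification on fabricated 40-hex ids:

```perl
#!/usr/bin/env perl
# Classify grafts-file lines the way read_grafts() does: sha1-led lines
# define a commit's parent set, other lines are carried along as comments.
use strict;
use warnings;

my $sha1 = qr/[a-f\d]{40}/;
my (%grafts, @comments);
my @lines = ("# keep this comment\n",
	     ('a' x 40) . ' ' . ('b' x 40) . ' ' . ('c' x 40) . "\n");
foreach my $line (@lines) {
	if ($line =~ /^($sha1)\s+/) {
		my $c = $1;
		$grafts{$c}{$_} = 1 foreach split /\s+/, $line;
	} else {
		push @comments, $line;
	}
}
# like the original, the commit id itself lands in its own parent set here;
# write_grafts() is what deletes it later
print scalar(keys %{ $grafts{'a' x 40} }), "\n"; # 3
```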
sub write_grafts {
	my ($grafts, $comments, $gr_file) = @_;

	open my $fh, '>', $gr_file or croak $!;
	foreach my $c (sort keys %$grafts) {
		if ($comments->{$c}) {
			print $fh $_ foreach @{$comments->{$c}};
		}
		my $p = $grafts->{$c};
		my %x; # real parents
		delete $p->{$c}; # commits are not self-reproducing...
		my $pid = open my $ch, '-|';
		defined $pid or croak $!;
		if (!$pid) {
			exec(qw/git-cat-file commit/, $c) or croak $!;
		}
		while (<$ch>) {
			if (/^parent ($sha1)/) {
				$x{$1} = $p->{$1} = 1;
			} else {
				last unless /^\S/;
			}
		}
		close $ch; # breaking the pipe

		# if real parents are the only ones in the grafts, drop it
		next if join(' ',sort keys %$p) eq join(' ',sort keys %x);

		my (@ip, @jp, $mb);
		my %del = %x;
		@ip = @jp = keys %$p;
		foreach my $i (@ip) {
			next if $del{$i} || $p->{$i} == 2;
			foreach my $j (@jp) {
				next if $i eq $j || $del{$j} || $p->{$j} == 2;
				$mb = eval { safe_qx('git-merge-base',$i,$j) };
				next unless $mb;
				chomp $mb;
				next if $x{$mb};
				if ($mb eq $j) {
					delete $p->{$i};
					$del{$i} = 1;
				} elsif ($mb eq $i) {
					delete $p->{$j};
					$del{$j} = 1;
				}
			}
		}

		# if real parents are the only ones in the grafts, drop it
		next if join(' ',sort keys %$p) eq join(' ',sort keys %x);

		print $fh $c, ' ', join(' ', sort keys %$p),"\n";
	}
	if ($comments->{'END'}) {
		print $fh $_ foreach @{$comments->{'END'}};
	}
	close $fh or croak $!;
}
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
sub read_url_paths_all {
	my ($l_map, $pfx, $p) = @_;
	my @dir;
	foreach (<$p/*>) {
		if (-r "$_/info/url") {
			$pfx .= '/' if $pfx && $pfx !~ m!/$!;
			my $id = $pfx . basename $_;
			my $url = file_to_s("$_/info/url");
			my ($u, $p) = repo_path_split($url);
			$l_map->{$u}->{$p} = $id;
		} elsif (-d $_) {
			push @dir, $_;
		}
	}
	foreach (@dir) {
		my $x = $_;
		$x =~ s!^\Q$GIT_DIR\E/svn/!!o;
		read_url_paths_all($l_map, $x, $_);
	}
}
# this one only gets ids that have been imported, not new ones
sub read_url_paths {
	my $l_map = {};
	git_svn_each(sub { my $x = shift;
			my $url = file_to_s("$GIT_DIR/svn/$x/info/url");
			my ($u, $p) = repo_path_split($url);
			$l_map->{$u}->{$p} = $x;
			});
	return $l_map;
}
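Both routines build the same nested structure: repository root URL, then path inside the repository, mapping to a GIT_SVN id. A sketch of that shape with made-up values:

```perl
use strict;
use warnings;

# repo root => path-in-repo => GIT_SVN id (all values hypothetical)
my $l_map = {};
$l_map->{'svn://example.com/repo'}->{'trunk'}        = 'git-svn';
$l_map->{'svn://example.com/repo'}->{'branches/1.x'} = 'branch-1.x';

foreach my $u (sort keys %$l_map) {
	foreach my $p (sort keys %{$l_map->{$u}}) {
		print "$u : $p => $l_map->{$u}->{$p}\n";
	}
}
```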
sub extract_metadata {
	my $id = shift or return (undef, undef, undef);
	my ($url, $rev, $uuid) = ($id =~ /^git-svn-id:\s(\S+?)\@(\d+)
					\s([a-f\d\-]+)$/x);
	if (!defined $rev || !$uuid || !$url) {
		# some of the original repositories I made had
		# identifiers like this:
		($rev, $uuid) = ($id =~/^git-svn-id:\s(\d+)\@([a-f\d\-]+)/);
	}
	return ($url, $rev, $uuid);
}
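The modern trailer parsed here has the shape `git-svn-id: <url>@<rev> <uuid>`. A minimal sketch of the first regex applied to a made-up trailer line:

```perl
use strict;
use warnings;

# hypothetical trailer of the form extract_metadata() expects
my $id = 'git-svn-id: svn://example.com/repo/trunk@42 '
       . '2dd1a4c0-aaaa-bbbb-cccc-0123456789ab';
my ($url, $rev, $uuid) = ($id =~ /^git-svn-id:\s(\S+?)\@(\d+)
					\s([a-f\d\-]+)$/x);
# $url, $rev and $uuid now hold the three fields of the trailer
```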
sub cmt_metadata {
	return extract_metadata((grep(/^git-svn-id: /,
		safe_qx(qw/git-cat-file commit/, shift)))[-1]);
}
sub get_commit_time {
	my $cmt = shift;
	defined(my $pid = open my $fh, '-|') or croak $!;
	if (!$pid) {
		exec qw/git-rev-list --pretty=raw -n1/, $cmt or croak $!;
	}
	while (<$fh>) {
		/^committer\s(?:.+) (\d+) ([\-\+]?\d+)$/ or next;
		my ($s, $tz) = ($1, $2);
		if ($tz =~ s/^\+//) {
			$s += tz_to_s_offset($tz);
		} elsif ($tz =~ s/^\-//) {
			$s -= tz_to_s_offset($tz);
		}
		close $fh;
		return $s;
	}
	die "Can't get commit time for commit: $cmt\n";
}
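The committer header scanned above has the shape `committer NAME <EMAIL> EPOCH ±HHMM`. A sketch of that extraction against a made-up header line:

```perl
use strict;
use warnings;

# hypothetical `git rev-list --pretty=raw` committer header
my $line = 'committer A U Thor <author@example.com> 1150000000 -0700';
my ($s, $tz) = $line =~ /^committer\s(?:.+) (\d+) ([\-\+]?\d+)$/;
# $s is epoch seconds; get_commit_time() then strips the sign from $tz
# and adds or subtracts tz_to_s_offset($tz) accordingly
```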
sub tz_to_s_offset {
	my ($tz) = @_;
	$tz =~ s/(\d\d)$//;
	return ($1 * 60) + ($tz * 3600);
}
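By the time `tz_to_s_offset` runs, the caller has already stripped the leading sign, so the sub only splits hours from minutes. Inlining the sub above for a quick check:

```perl
use strict;
use warnings;

# copy of tz_to_s_offset() above: the last two digits are the minutes
# field, everything before them is hours
sub tz_to_s_offset {
	my ($tz) = @_;
	$tz =~ s/(\d\d)$//;
	return ($1 * 60) + ($tz * 3600);
}

my $ist = tz_to_s_offset('0530');   # 5 * 3600 + 30 * 60
my $pst = tz_to_s_offset('0800');   # 8 * 3600
```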
sub setup_pager { # translated to Perl from pager.c
	return unless (-t *STDOUT);
	my $pager = $ENV{PAGER};
	if (!defined $pager) {
		$pager = 'less';
	} elsif (length $pager == 0 || $pager eq 'cat') {
		return;
	}
	pipe my $rfd, my $wfd or return;
	defined(my $pid = fork) or croak $!;
	if (!$pid) {
		open STDOUT, '>&', $wfd or croak $!;
		return;
	}
	open STDIN, '<&', $rfd or croak $!;
	$ENV{LESS} ||= '-S';
	exec $pager or croak "Can't run pager: $!\n";
}
sub get_author_info {
	my ($dest, $author, $t, $tz) = @_;
	$author =~ s/(?:^\s*|\s*$)//g;
	$dest->{a_raw} = $author;
	my $_a;
	if ($_authors) {
		$_a = $rusers{$author} || undef;
	}
	if (!$_a) {
		($_a) = ($author =~ /<([^>]+)\@[^>]+>$/);
	}
	$dest->{t} = $t;
	$dest->{tz} = $tz;
	$dest->{a} = $_a;
	# Date::Parse isn't in the standard Perl distro :(
	if ($tz =~ s/^\+//) {
		$t += tz_to_s_offset($tz);
	} elsif ($tz =~ s/^\-//) {
		$t -= tz_to_s_offset($tz);
	}
	$dest->{t_utc} = $t;
}
sub process_commit {
	my ($c, $r_min, $r_max, $defer) = @_;
	if (defined $r_min && defined $r_max) {
		if ($r_min == $c->{r} && $r_min == $r_max) {
			show_commit($c);
			return 0;
		}
		return 1 if $r_min == $r_max;
		if ($r_min < $r_max) {
			# we need to reverse the print order
			return 0 if (defined $_limit && --$_limit < 0);
			push @$defer, $c;
			return 1;
		}
		if ($r_min != $r_max) {
			return 1 if ($r_min < $c->{r});
			return 1 if ($r_max > $c->{r});
		}
	}
	return 0 if (defined $_limit && --$_limit < 0);
	show_commit($c);
	return 1;
}
sub show_commit {
	my $c = shift;
	if ($_oneline) {
		my $x = "\n";
		if (my $l = $c->{l}) {
			while ($l->[0] =~ /^\s*$/) { shift @$l }
			$x = $l->[0];
		}
		$_l_fmt ||= 'A' . length($c->{r});
		print 'r',pack($_l_fmt, $c->{r}),' | ';
		print "$c->{c} | " if $_show_commit;
		print $x;
	} else {
		show_commit_normal($c);
	}
}
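The `--oneline` path pads every revision number to a fixed width using pack's space-padded ASCII template, sized from the first revision printed in the run. A standalone sketch:

```perl
use strict;
use warnings;

# template width taken from a hypothetical widest revision, r1234
my $l_fmt = 'A' . length(1234);
my $field = 'r' . pack($l_fmt, 42) . ' | ';
# "42" is space-padded on the right to 4 characters
```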
sub show_commit_changed_paths {
	my ($c) = @_;
	return unless $c->{changed};
	print "Changed paths:\n", @{$c->{changed}};
}
sub show_commit_normal {
	my ($c) = @_;
	print '-' x72, "\nr$c->{r} | ";
	print "$c->{c} | " if $_show_commit;
	print "$c->{a} | ", strftime("%Y-%m-%d %H:%M:%S %z (%a, %d %b %Y)",
				localtime($c->{t_utc})), ' | ';
	my $nr_line = 0;

	if (my $l = $c->{l}) {
		while ($l->[$#$l] eq "\n" && $#$l > 0
				&& $l->[($#$l - 1)] eq "\n") {
			pop @$l;
		}
		$nr_line = scalar @$l;
		if (!$nr_line) {
			print "1 line\n\n\n";
		} else {
			if ($nr_line == 1) {
				$nr_line = '1 line';
			} else {
				$nr_line .= ' lines';
			}
			print $nr_line, "\n";
			show_commit_changed_paths($c);
			print "\n";
			print $_ foreach @$l;
		}
	} else {
		print "1 line\n";
		show_commit_changed_paths($c);
		print "\n";
	}
	foreach my $x (qw/raw diff/) {
		if ($c->{$x}) {
			print "\n";
			print $_ foreach @{$c->{$x}};
		}
	}
}
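The while loop at the top of `show_commit_normal` trims runs of trailing blank lines from the log message down to a single one. Isolated on a hypothetical message:

```perl
use strict;
use warnings;

my @l = ("only line of the log message\n", "\n", "\n", "\n");
while ($l[$#l] eq "\n" && $#l > 0 && $l[$#l - 1] eq "\n") {
	pop @l;
}
# @l is now ("only line of the log message\n", "\n")
```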
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore, as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>

sub libsvn_load {
	return unless $_use_lib;
	$_use_lib = eval {
		require SVN::Core;
		if ($SVN::Core::VERSION lt '1.1.0') {
			die "Need SVN::Core 1.1.0 or better ",
			    "(got $SVN::Core::VERSION) ",
			    "Falling back to command-line svn\n";
		}
		require SVN::Ra;
		require SVN::Delta;
		push @SVN::Git::Editor::ISA, 'SVN::Delta::Editor';
		my $kill_stupid_warnings = $SVN::Node::none.$SVN::Node::file.
					$SVN::Node::dir.$SVN::Node::unknown.
					$SVN::Node::none.$SVN::Node::file.
					$SVN::Node::dir.$SVN::Node::unknown.
					$SVN::Auth::SSL::CNMISMATCH.
					$SVN::Auth::SSL::NOTYETVALID.
					$SVN::Auth::SSL::EXPIRED.
					$SVN::Auth::SSL::UNKNOWNCA.
					$SVN::Auth::SSL::OTHER;
		1;
	};
}
sub _simple_prompt {
	my ($cred, $realm, $default_username, $may_save, $pool) = @_;
	$may_save = undef if $_no_auth_cache;
	$default_username = $_username if defined $_username;
	if (defined $default_username && length $default_username) {
		if (defined $realm && length $realm) {
			print "Authentication realm: $realm\n";
		}
		$cred->username($default_username);
	} else {
		_username_prompt($cred, $realm, $may_save, $pool);
	}
	$cred->password(_read_password("Password for '" .
				$cred->username . "': ", $realm));
	$cred->may_save($may_save);
	$SVN::_Core::SVN_NO_ERROR;
}
sub _ssl_server_trust_prompt {
	my ($cred, $realm, $failures, $cert_info, $may_save, $pool) = @_;
	$may_save = undef if $_no_auth_cache;
	print "Error validating server certificate for '$realm':\n";
	if ($failures & $SVN::Auth::SSL::UNKNOWNCA) {
		print " - The certificate is not issued by a trusted ",
		      "authority. Use the\n",
		      "   fingerprint to validate the certificate manually!\n";
	}
	if ($failures & $SVN::Auth::SSL::CNMISMATCH) {
		print " - The certificate hostname does not match.\n";
	}
	if ($failures & $SVN::Auth::SSL::NOTYETVALID) {
		print " - The certificate is not yet valid.\n";
	}
	if ($failures & $SVN::Auth::SSL::EXPIRED) {
		print " - The certificate has expired.\n";
	}
	if ($failures & $SVN::Auth::SSL::OTHER) {
		print " - The certificate has an unknown error.\n";
	}
	printf( "Certificate information:\n".
	        " - Hostname: %s\n".
	        " - Valid: from %s until %s\n".
	        " - Issuer: %s\n".
	        " - Fingerprint: %s\n",
	        map $cert_info->$_, qw(hostname valid_from valid_until
	                               issuer_dname fingerprint) );
	my $choice;
prompt:
	print $may_save ?
	      "(R)eject, accept (t)emporarily or accept (p)ermanently? " :
	      "(R)eject or accept (t)emporarily? ";
	$choice = lc(substr(<STDIN> || 'R', 0, 1));
	if ($choice =~ /^t$/i) {
		$cred->may_save(undef);
	} elsif ($choice =~ /^r$/i) {
		return -1;
	} elsif ($may_save && $choice =~ /^p$/i) {
		$cred->may_save($may_save);
	} else {
		goto prompt;
	}
	$cred->accepted_failures($failures);
	$SVN::_Core::SVN_NO_ERROR;
}
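The `$failures` argument is a bitmask of `$SVN::Auth::SSL::*` flags, tested one bit at a time as above. A sketch of the same decoding with stand-in constants (the real values come from the SVN bindings):

```perl
use strict;
use warnings;

# stand-in flag values; the real ones are $SVN::Auth::SSL::* constants
my %SSL = (NOTYETVALID => 1, EXPIRED => 2, CNMISMATCH => 4, UNKNOWNCA => 8);
my $failures = $SSL{UNKNOWNCA} | $SSL{EXPIRED};

my @problems;
push @problems, 'untrusted CA'      if $failures & $SSL{UNKNOWNCA};
push @problems, 'hostname mismatch' if $failures & $SSL{CNMISMATCH};
push @problems, 'expired'           if $failures & $SSL{EXPIRED};
```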
sub _ssl_client_cert_prompt {
	my ($cred, $realm, $may_save, $pool) = @_;
	$may_save = undef if $_no_auth_cache;
	print "Client certificate filename: ";
	chomp(my $filename = <STDIN>);
	$cred->cert_file($filename);
	$cred->may_save($may_save);
	$SVN::_Core::SVN_NO_ERROR;
}
sub _ssl_client_cert_pw_prompt {
	my ($cred, $realm, $may_save, $pool) = @_;
	$may_save = undef if $_no_auth_cache;
	$cred->password(_read_password("Password: ", $realm));
	$cred->may_save($may_save);
	$SVN::_Core::SVN_NO_ERROR;
}
sub _username_prompt {
	my ($cred, $realm, $may_save, $pool) = @_;
	$may_save = undef if $_no_auth_cache;
	if (defined $realm && length $realm) {
		print "Authentication realm: $realm\n";
	}
	my $username;
	if (defined $_username) {
		$username = $_username;
	} else {
		print "Username: ";
		chomp($username = <STDIN>);
	}
	$cred->username($username);
	$cred->may_save($may_save);
	$SVN::_Core::SVN_NO_ERROR;
}
sub _read_password {
	my ($prompt, $realm) = @_;
	print $prompt;
	require Term::ReadKey;
	Term::ReadKey::ReadMode('noecho');
	my $password = '';
	while (defined(my $key = Term::ReadKey::ReadKey(0))) {
		last if $key =~ /[\012\015]/; # \n\r
		$password .= $key;
	}
	Term::ReadKey::ReadMode('restore');
	print "\n";
	$password;
}
sub libsvn_connect {
	my ($url) = @_;
	if (!$AUTH_BATON || !$AUTH_CALLBACKS) {
		SVN::_Core::svn_config_ensure($_config_dir, undef);
		($AUTH_BATON, $AUTH_CALLBACKS) = SVN::Core::auth_open_helper([
		  SVN::Client::get_simple_provider(),
		  SVN::Client::get_ssl_server_trust_file_provider(),
		  SVN::Client::get_simple_prompt_provider(
		    \&_simple_prompt, 2),
		  SVN::Client::get_ssl_client_cert_prompt_provider(
		    \&_ssl_client_cert_prompt, 2),
		  SVN::Client::get_ssl_client_cert_pw_prompt_provider(
		    \&_ssl_client_cert_pw_prompt, 2),
		  SVN::Client::get_username_provider(),
		  SVN::Client::get_ssl_server_trust_prompt_provider(
		    \&_ssl_server_trust_prompt),
		  SVN::Client::get_username_prompt_provider(
		    \&_username_prompt, 2),
		]);
	}
	SVN::Ra->new(url => $url, auth => $AUTH_BATON,
	             auth_provider_callbacks => $AUTH_CALLBACKS);
}
sub libsvn_get_file {
|
2006-11-05 13:51:10 +08:00
|
|
|
my ($gui, $f, $rev, $chg) = @_;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
my $p = $f;
|
2006-08-12 02:11:29 +08:00
|
|
|
if (length $SVN_PATH > 0) {
|
|
|
|
return unless ($p =~ s#^\Q$SVN_PATH\E/##);
|
|
|
|
}
|
2006-11-05 13:51:10 +08:00
|
|
|
print "\t$chg\t$f\n" unless $_q;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
|
2006-06-16 04:36:12 +08:00
|
|
|
my ($hash, $pid, $in, $out);
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
my $pool = SVN::Pool->new;
|
2006-06-16 04:36:12 +08:00
|
|
|
defined($pid = open3($in, $out, '>&STDERR',
|
|
|
|
qw/git-hash-object -w --stdin/)) or croak $!;
|
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore; as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 18:07:14 +08:00
|
|
|
# redirect STDOUT for SVN 1.1.x compatibility
|
|
|
|
open my $stdout, '>&', \*STDOUT or croak $!;
|
|
|
|
open STDOUT, '>&', $in or croak $!;
|
|
|
|
my ($r, $props) = $SVN->get_file($f, $rev, \*STDOUT, $pool);
|
2006-06-16 04:36:12 +08:00
|
|
|
$in->flush == 0 or croak $!;
|
2006-06-28 18:07:14 +08:00
|
|
|
open STDOUT, '>&', $stdout or croak $!;
|
2006-06-16 04:36:12 +08:00
|
|
|
close $in or croak $!;
|
2006-06-28 18:07:14 +08:00
|
|
|
close $stdout or croak $!;
|
2006-06-13 06:23:48 +08:00
|
|
|
$pool->clear;
|
2006-06-16 04:36:12 +08:00
|
|
|
chomp($hash = do { local $/; <$out> });
|
|
|
|
close $out or croak $!;
|
|
|
|
waitpid $pid, 0;
|
|
|
|
$hash =~ /^$sha1$/o or die "not a sha1: $hash\n";
|
|
|
|
|
|
|
|
my $mode = exists $props->{'svn:executable'} ? '100755' : '100644';
|
2006-06-13 06:23:48 +08:00
|
|
|
if (exists $props->{'svn:special'}) {
|
|
|
|
$mode = '120000';
|
2006-06-16 04:36:12 +08:00
|
|
|
my $link = `git-cat-file blob $hash`;
|
2006-06-13 06:23:48 +08:00
|
|
|
$link =~ s/^link // or die "svn:special file with contents: <",
|
|
|
|
$link, "> is not understood\n";
|
2006-06-16 04:36:12 +08:00
|
|
|
defined($pid = open3($in, $out, '>&STDERR',
|
|
|
|
qw/git-hash-object -w --stdin/)) or croak $!;
|
|
|
|
print $in $link;
|
|
|
|
$in->flush == 0 or croak $!;
|
|
|
|
close $in or croak $!;
|
|
|
|
chomp($hash = do { local $/; <$out> });
|
|
|
|
close $out or croak $!;
|
|
|
|
waitpid $pid, 0;
|
|
|
|
$hash =~ /^$sha1$/o or die "not a sha1: $hash\n";
|
2006-06-13 06:23:48 +08:00
|
|
|
}
|
|
|
|
print $gui $mode,' ',$hash,"\t",$p,"\0" or croak $!;
|
|
|
|
}
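The svn:special handling above stores a symlink as a git 120000 blob by stripping the "link " prefix SVN puts in the file contents. A minimal sketch of that transformation (in Python, for illustration only; the helper name is hypothetical):

```python
def svn_special_to_symlink(blob):
    """Turn the contents of an svn:special file ('link <target>') into
    the symlink target that git stores in a mode-120000 blob, mirroring
    the s/^link // substitution in the code above."""
    prefix = 'link '
    if not blob.startswith(prefix):
        raise ValueError(
            'svn:special file with contents: <%s> is not understood' % blob)
    return blob[len(prefix):]
```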
|
|
|
|
|
|
|
|
sub libsvn_log_entry {
|
|
|
|
my ($rev, $author, $date, $msg, $parents) = @_;
|
|
|
|
my ($Y,$m,$d,$H,$M,$S) = ($date =~ /^(\d{4})\-(\d\d)\-(\d\d)T
|
|
|
|
(\d\d)\:(\d\d)\:(\d\d).\d+Z$/x)
|
|
|
|
or die "Unable to parse date: $date\n";
|
|
|
|
if (defined $_authors && ! defined $users{$author}) {
|
|
|
|
die "Author: $author not defined in $_authors file\n";
|
|
|
|
}
|
2006-08-12 02:11:29 +08:00
|
|
|
$msg = '' if ($rev == 0 && !defined $msg);
|
2006-06-13 06:23:48 +08:00
|
|
|
return { revision => $rev, date => "+0000 $Y-$m-$d $H:$M:$S",
|
|
|
|
author => $author, msg => $msg."\n", parents => $parents || [] }
|
|
|
|
}
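libsvn_log_entry converts the ISO-8601 UTC timestamp SVN reports into the "+0000 Y-m-d H:M:S" form git-svn feeds to git. A sketch of that conversion (in Python, for illustration; the function name is hypothetical):

```python
import re

def log_entry_date(svn_date):
    """Convert an SVN ISO-8601 UTC timestamp, e.g.
    '2006-06-13T06:23:48.000000Z', into the '+0000 Y-m-d H:M:S'
    string built by libsvn_log_entry above."""
    m = re.match(r'^(\d{4})-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)\.\d+Z$',
                 svn_date)
    if m is None:
        raise ValueError('Unable to parse date: %s' % svn_date)
    y, mo, d, h, mi, s = m.groups()
    return '+0000 %s-%s-%s %s:%s:%s' % (y, mo, d, h, mi, s)
```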
|
|
|
|
|
|
|
|
sub process_rm {
|
|
|
|
my ($gui, $last_commit, $f) = @_;
|
|
|
|
$f =~ s#^\Q$SVN_PATH\E/?## or return;
|
|
|
|
# remove entire directories.
|
|
|
|
if (safe_qx('git-ls-tree',$last_commit,'--',$f) =~ /^040000 tree/) {
|
|
|
|
defined(my $pid = open my $ls, '-|') or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
exec(qw/git-ls-tree -r --name-only -z/,
|
|
|
|
$last_commit,'--',$f) or croak $!;
|
|
|
|
}
|
|
|
|
local $/ = "\0";
|
|
|
|
while (<$ls>) {
|
|
|
|
print $gui '0 ',0 x 40,"\t",$_ or croak $!;
|
|
|
|
}
|
2006-06-16 03:50:12 +08:00
|
|
|
close $ls or croak $?;
|
2006-06-13 06:23:48 +08:00
|
|
|
} else {
|
|
|
|
print $gui '0 ',0 x 40,"\t",$f,"\0" or croak $!;
|
|
|
|
}
|
|
|
|
}
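process_rm deletes index entries by feeding `git-update-index -z --index-info` a record with mode 0 and a null sha1 for each path. A sketch of the record format it prints (in Python, for illustration; the helper name is hypothetical):

```python
def index_info_delete_record(path):
    """Build one NUL-terminated record for
    'git update-index -z --index-info' that removes `path` from the
    index: mode 0, a 40-zero sha1, a tab, the path, and a NUL --
    exactly the shape process_rm prints above."""
    return '0 ' + '0' * 40 + '\t' + path + '\0'
```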
|
|
|
|
|
|
|
|
sub libsvn_fetch {
|
|
|
|
my ($last_commit, $paths, $rev, $author, $date, $msg) = @_;
|
|
|
|
open my $gui, '| git-update-index -z --index-info' or croak $!;
|
|
|
|
my @amr;
|
|
|
|
foreach my $f (keys %$paths) {
|
|
|
|
my $m = $paths->{$f}->action();
|
|
|
|
$f =~ s#^/+##;
|
|
|
|
if ($m =~ /^[DR]$/) {
|
2006-06-28 10:39:14 +08:00
|
|
|
print "\t$m\t$f\n" unless $_q;
|
2006-06-13 06:23:48 +08:00
|
|
|
process_rm($gui, $last_commit, $f);
|
|
|
|
next if $m eq 'D';
|
|
|
|
# 'R' can be file replacements, too, right?
|
|
|
|
}
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $t = $SVN->check_path($f, $rev, $pool);
|
|
|
|
if ($t == $SVN::Node::file) {
|
|
|
|
if ($m =~ /^[AMR]$/) {
|
2006-06-28 10:39:14 +08:00
|
|
|
push @amr, [ $m, $f ];
|
2006-06-13 06:23:48 +08:00
|
|
|
} else {
|
|
|
|
die "Unrecognized action: $m, ($f r$rev)\n";
|
|
|
|
}
|
2006-07-20 16:43:01 +08:00
|
|
|
} elsif ($t == $SVN::Node::dir && $m =~ /^[AR]$/) {
|
|
|
|
my @traversed = ();
|
|
|
|
libsvn_traverse($gui, '', $f, $rev, \@traversed);
|
|
|
|
foreach (@traversed) {
|
|
|
|
push @amr, [ $m, $_ ]
|
|
|
|
}
|
2006-06-13 06:23:48 +08:00
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
}
|
2006-06-28 10:39:14 +08:00
|
|
|
foreach (@amr) {
|
2006-11-05 13:51:10 +08:00
|
|
|
libsvn_get_file($gui, $_->[1], $rev, $_->[0]);
|
2006-06-28 10:39:14 +08:00
|
|
|
}
|
2006-06-16 03:50:12 +08:00
|
|
|
close $gui or croak $?;
|
2006-06-13 06:23:48 +08:00
|
|
|
return libsvn_log_entry($rev, $author, $date, $msg, [$last_commit]);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub svn_grab_base_rev {
|
|
|
|
defined(my $pid = open my $fh, '-|') or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
open my $null, '>', '/dev/null' or croak $!;
|
|
|
|
open STDERR, '>&', $null or croak $!;
|
|
|
|
exec qw/git-rev-parse --verify/,"refs/remotes/$GIT_SVN^0"
|
|
|
|
or croak $!;
|
|
|
|
}
|
|
|
|
chomp(my $c = do { local $/; <$fh> });
|
|
|
|
close $fh;
|
|
|
|
if (defined $c && length $c) {
|
2006-06-28 10:39:11 +08:00
|
|
|
my ($url, $rev, $uuid) = cmt_metadata($c);
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 10:39:13 +08:00
|
|
|
return ($rev, $c) if defined $rev;
|
|
|
|
}
|
|
|
|
if ($_no_metadata) {
|
|
|
|
my $offset = -41; # from tail
|
|
|
|
my $rl;
|
|
|
|
open my $fh, '<', $REVDB or
|
|
|
|
die "--no-metadata specified and $REVDB not readable\n";
|
|
|
|
seek $fh, $offset, 2;
|
|
|
|
$rl = readline $fh;
|
|
|
|
defined $rl or return (undef, undef);
|
|
|
|
chomp $rl;
|
|
|
|
while ($c ne $rl && tell $fh != 0) {
|
|
|
|
$offset -= 41;
|
|
|
|
seek $fh, $offset, 2;
|
|
|
|
$rl = readline $fh;
|
|
|
|
defined $rl or return (undef, undef);
|
|
|
|
chomp $rl;
|
|
|
|
}
|
|
|
|
my $rev = tell $fh;
|
|
|
|
croak $! if ($rev < -1);
|
|
|
|
$rev = ($rev - 41) / 41;
|
|
|
|
close $fh or croak $!;
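The --no-metadata branch above scans .rev_db backwards in fixed-width steps: each revision occupies a 41-byte record (a 40-hex-digit sha1 plus a newline), so the offset reported by tell right after reading a record maps directly to a revision number. The arithmetic can be sketched as follows (in Python, for illustration; the function names are hypothetical):

```python
def revdb_offset_to_rev(offset_after_record):
    """Map a .rev_db offset -- as returned by tell() right after reading
    a record -- to its SVN revision number.  Record N spans bytes
    41*N .. 41*(N+1), so rev = (tell - 41) / 41, matching the
    '$rev = ($rev - 41) / 41' computation above."""
    assert offset_after_record % 41 == 0 and offset_after_record >= 41
    return (offset_after_record - 41) // 41

def revdb_record_start(rev):
    """Inverse mapping: byte offset where revision `rev`'s record begins."""
    return 41 * rev
```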
|
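# A note on the .rev_db format read above (an inference from this code, not
# documented elsewhere in this file): it appears to be a flat file of
# fixed-width 41-byte records, one 40-character hex sha1 plus "\n" per SVN
# revision, so revision numbers and byte offsets convert directly:
#
#     offset of revision N  = N * 41
#     revision at tell() $t = ($t - 41) / 41   # after reading a record
#
# which is why the scan above walks backwards from the tail in -41 steps.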
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
		return ($rev, $c);
	}
	return (undef, undef);
}

sub libsvn_parse_revision {
	my $base = shift;
	my $head = $SVN->get_latest_revnum();
	if (!defined $_revision || $_revision eq 'BASE:HEAD') {
		return ($base + 1, $head) if (defined $base);
		return (0, $head);
	}
	return ($1, $2) if ($_revision =~ /^(\d+):(\d+)$/);
	return ($_revision, $_revision) if ($_revision =~ /^\d+$/);
	if ($_revision =~ /^BASE:(\d+)$/) {
		return ($base + 1, $1) if (defined $base);
		return (0, $head);
	}
	return ($1, $head) if ($_revision =~ /^(\d+):HEAD$/);
	die "revision argument: $_revision not understood by git-svn\n",
	    "Try using the command-line svn client instead\n";
}

sub libsvn_traverse {
	my ($gui, $pfx, $path, $rev, $files) = @_;
	my $cwd = "$pfx/$path";
	my $pool = SVN::Pool->new;
	$cwd =~ s#^/+##g;
	my ($dirent, $r, $props) = $SVN->get_dir($cwd, $rev, $pool);
	foreach my $d (keys %$dirent) {
		my $t = $dirent->{$d}->kind;
		if ($t == $SVN::Node::dir) {
			libsvn_traverse($gui, $cwd, $d, $rev, $files);
		} elsif ($t == $SVN::Node::file) {
			my $file = "$cwd/$d";
			if (defined $files) {
				push @$files, $file;
			} else {
				libsvn_get_file($gui, $file, $rev, 'A');
			}
		}
	}
	$pool->clear;
}

sub libsvn_traverse_ignore {
	my ($fh, $path, $r) = @_;
	$path =~ s#^/+##g;
	my $pool = SVN::Pool->new;
	my ($dirent, undef, $props) = $SVN->get_dir($path, $r, $pool);
	my $p = $path;
	$p =~ s#^\Q$SVN_PATH\E/?##;
	print $fh length $p ? "\n# $p\n" : "\n# /\n";
	if (my $s = $props->{'svn:ignore'}) {
		$s =~ s/[\r\n]+/\n/g;
		chomp $s;
		if (length $p == 0) {
			$s =~ s#\n#\n/$p#g;
			print $fh "/$s\n";
		} else {
			$s =~ s#\n#\n/$p/#g;
			print $fh "/$p/$s\n";
		}
	}
	foreach (sort keys %$dirent) {
		next if $dirent->{$_}->kind != $SVN::Node::dir;
		libsvn_traverse_ignore($fh, "$path/$_", $r);
	}
	$pool->clear;
}

sub revisions_eq {
	my ($path, $r0, $r1) = @_;
	return 1 if $r0 == $r1;
	my $nr = 0;
	if ($_use_lib) {
		# should be OK to use Pool here, (r1 - r0) should be small
		my $pool = SVN::Pool->new;
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore, as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 18:07:14 +08:00
		libsvn_get_log($SVN, "/$path", $r0, $r1,
				0, 1, 1, sub { $nr++ }, $pool);
		$pool->clear;
	} else {
		my ($url, undef) = repo_path_split($SVN_URL);
		my $svn_log = svn_log_raw("$url/$path", "-r$r0:$r1");
		while (next_log_entry($svn_log)) { $nr++ }
		close $svn_log->{fh};
	}
	return 0 if ($nr > 1);
	return 1;
}

sub libsvn_find_parent_branch {
	my ($paths, $rev, $author, $date, $msg) = @_;
	my $svn_path = '/'.$SVN_PATH;

	# look for a parent from another branch:
	my $i = $paths->{$svn_path} or return;
	my $branch_from = $i->copyfrom_path or return;
	my $r = $i->copyfrom_rev;
	print STDERR "Found possible branch point: ",
			"$branch_from => $svn_path, $r\n";
	$branch_from =~ s#^/##;
	my $l_map = {};
	read_url_paths_all($l_map, '', "$GIT_DIR/svn");
	my $url = $SVN->{url};
	defined $l_map->{$url} or return;
	my $id = $l_map->{$url}->{$branch_from};
	if (!defined $id && $_follow_parent) {
		print STDERR "Following parent: $branch_from\@$r\n";
		# auto create a new branch and follow it
		$id = basename($branch_from);
		$id .= '@'.$r if -r "$GIT_DIR/svn/$id";
		while (-r "$GIT_DIR/svn/$id") {
			# just grow a tail if we're not unique enough :x
			$id .= '-';
		}
	}
	return unless defined $id;

	my ($r0, $parent) = find_rev_before($r, $id, 1);
	if ($_follow_parent && (!defined $r0 || !defined $parent)) {
		defined(my $pid = fork) or croak $!;
		if (!$pid) {
			$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
			init_vars();
			$SVN_URL = "$url/$branch_from";
			$SVN_LOG = $SVN = undef;
			setup_git_svn();
			# we can't assume SVN_URL exists at r+1:
			$_revision = "0:$r";
			fetch_lib();
			exit 0;
		}
		waitpid $pid, 0;
		croak $? if $?;
		($r0, $parent) = find_rev_before($r, $id, 1);
	}
	return unless (defined $r0 && defined $parent);
	if (revisions_eq($branch_from, $r0, $r)) {
		unlink $GIT_SVN_INDEX;
		print STDERR "Found branch parent: ($GIT_SVN) $parent\n";
		sys(qw/git-read-tree/, $parent);
		return libsvn_fetch($parent, $paths, $rev,
				$author, $date, $msg);
	}
	print STDERR "Nope, branch point not imported or unknown\n";
	return undef;
}
sub libsvn_get_log {
	my ($ra, @args) = @_;
	if ($SVN::Core::VERSION le '1.2.0') {
		# SVN 1.2 added a limit argument to get_log();
		# older libraries do not accept it, so drop it:
		splice(@args, 3, 1);
	}
	$ra->get_log(@args);
}

sub libsvn_new_tree {
	if (my $log_entry = libsvn_find_parent_branch(@_)) {
		return $log_entry;
	}
	my ($paths, $rev, $author, $date, $msg) = @_;
	open my $gui, '| git-update-index -z --index-info' or croak $!;
	libsvn_traverse($gui, '', $SVN_PATH, $rev);
	close $gui or croak $?;
	return libsvn_log_entry($rev, $author, $date, $msg);
}

sub find_graft_path_commit {
	my ($tree_paths, $p1, $r1) = @_;
	foreach my $x (keys %$tree_paths) {
		next unless ($p1 =~ /^\Q$x\E/);
		my $i = $tree_paths->{$x};
		my ($r0, $parent) = find_rev_before($r1, $i, 1);
		return $parent if (defined $r0 && $r0 == $r1);
		print STDERR "r$r1 of $i not imported\n";
		next;
	}
	return undef;
}

sub find_graft_path_parents {
	my ($grafts, $tree_paths, $c, $p0, $r0) = @_;
	foreach my $x (keys %$tree_paths) {
		next unless ($p0 =~ /^\Q$x\E/);
		my $i = $tree_paths->{$x};
		my ($r, $parent) = find_rev_before($r0, $i, 1);
		if (defined $r && defined $parent && revisions_eq($x, $r, $r0)) {
			my ($url_b, undef, $uuid_b) = cmt_metadata($c);
			my ($url_a, undef, $uuid_a) = cmt_metadata($parent);
			next if ($url_a && $url_b && $url_a eq $url_b &&
				 $uuid_b eq $uuid_a);
			$grafts->{$c}->{$parent} = 1;
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}

sub libsvn_graft_file_copies {
	my ($grafts, $tree_paths, $path, $paths, $rev) = @_;
	foreach (keys %$paths) {
		my $i = $paths->{$_};
		my ($m, $p0, $r0) = ($i->action, $i->copyfrom_path,
		                     $i->copyfrom_rev);
		next unless (defined $p0 && defined $r0);

		my $p1 = $_;
		$p1 =~ s#^/##;
		$p0 =~ s#^/##;
		my $c = find_graft_path_commit($tree_paths, $p1, $rev);
		next unless $c;
		find_graft_path_parents($grafts, $tree_paths, $c, $p0, $r0);
	}
}

sub set_index {
	my $old = $ENV{GIT_INDEX_FILE};
	$ENV{GIT_INDEX_FILE} = shift;
	return $old;
}

sub restore_index {
	my ($old) = @_;
	if (defined $old) {
		$ENV{GIT_INDEX_FILE} = $old;
	} else {
		delete $ENV{GIT_INDEX_FILE};
	}
}
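The set_index/restore_index pair swaps GIT_INDEX_FILE in and out of the environment, and the subtle point is that restoring an unset variable must delete it rather than assign an empty value. A minimal Python sketch of the same save/restore pattern (function names mirror the Perl for illustration only, they are not part of git-svn):

```python
import os

def set_index(path):
    # Save the previous GIT_INDEX_FILE (None if it was unset),
    # then install the new index path.
    old = os.environ.get("GIT_INDEX_FILE")
    os.environ["GIT_INDEX_FILE"] = path
    return old

def restore_index(old):
    # A saved value of None means the variable was unset before,
    # so it must be removed entirely, not set to an empty string.
    if old is not None:
        os.environ["GIT_INDEX_FILE"] = old
    else:
        os.environ.pop("GIT_INDEX_FILE", None)
```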

sub libsvn_commit_cb {
	my ($rev, $date, $committer, $c, $msg, $r_last, $cmt_last) = @_;
	if ($_optimize_commits && $rev == ($r_last + 1)) {
		my $log = libsvn_log_entry($rev, $committer, $date, $msg);
		$log->{tree} = get_tree_from_treeish($c);
		my $cmt = git_commit($log, $cmt_last, $c);
		my @diff = safe_qx('git-diff-tree', $cmt, $c);
		if (@diff) {
			print STDERR "Trees differ: $cmt $c\n",
			             join('', @diff), "\n";
			exit 1;
		}
	} else {
		fetch("$rev=$c");
	}
}

sub libsvn_ls_fullurl {
	my $fullurl = shift;
	my ($repo, $path) = repo_path_split($fullurl);
	$SVN ||= libsvn_connect($repo);
	my @ret;
	my $pool = SVN::Pool->new;
	my ($dirent, undef, undef) = $SVN->get_dir($path,
	                                $SVN->get_latest_revnum, $pool);
	foreach my $d (keys %$dirent) {
		if ($dirent->{$d}->kind == $SVN::Node::dir) {
			push @ret, "$d/"; # add '/' for compat with cli svn
		}
	}
	$pool->clear;
	return @ret;
}

sub libsvn_skip_unknown_revs {
	my $err = shift;
	my $errno = $err->apr_err();
	# Maybe the branch we're tracking didn't
	# exist when the repo started, so it's
	# not an error if it doesn't, just continue
	#
	# Wonderfully consistent library, eh?
	# 160013 - svn:// and file://
	# 175002 - http(s)://
	# More codes may be discovered later...
	if ($errno == 175002 || $errno == 160013) {
		return;
	}
	croak "Error from SVN, ($errno): ", $err->expanded_message, "\n";
};

# Tie::File seems to be prone to offset errors if revisions get sparse,
# it's not that fast, either.  Tie::File is also not in Perl 5.6.  So
# one of my favorite modules is out :<  Next up would be one of the DBM
# modules, but I'm not sure which is most portable...  So I'll just
# go with something that's plain-text, but still capable of
# being randomly accessed.  So here's my ultra-simple fixed-width
# database.  All records are 40 characters + "\n", so it's easy to seek
# to a revision: (41 * rev) is the byte offset.
# A record of 40 0s denotes an empty revision.
# And yes, it's still pretty fast (faster than Tie::File).
sub revdb_set {
	my ($file, $rev, $commit) = @_;
	length $commit == 40 or croak "arg3 must be a full SHA1 hexsum\n";
	open my $fh, '+<', $file or croak $!;
	my $offset = $rev * 41;
	# assume that append is the common case:
	seek $fh, 0, 2 or croak $!;
	my $pos = tell $fh;
	if ($pos < $offset) {
		print $fh (('0' x 40), "\n") x (($offset - $pos) / 41);
	}
	seek $fh, $offset, 0 or croak $!;
	print $fh $commit, "\n";
	close $fh or croak $!;
}

sub revdb_get {
	my ($file, $rev) = @_;
	my $ret;
	my $offset = $rev * 41;
	open my $fh, '<', $file or croak $!;
	seek $fh, $offset, 0;
	if (tell $fh == $offset) {
		$ret = readline $fh;
		if (defined $ret) {
			chomp $ret;
			$ret = undef if ($ret =~ /^0{40}$/);
		}
	}
	close $fh or croak $!;
	return $ret;
}
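The fixed-width layout above is simple enough to mirror in any language: each record is exactly 41 bytes (40 hex characters plus a newline), so revision N lives at byte offset 41 * N, and a record of forty zeros marks an empty revision. A Python sketch of the same scheme (helper names chosen for illustration, not part of git-svn):

```python
RECLEN = 41          # 40 hex chars + "\n"
EMPTY = "0" * 40     # all-zero record marks an empty revision

def revdb_set(path, rev, commit):
    assert len(commit) == 40, "need a full SHA1 hexsum"
    with open(path, "r+") as fh:
        offset = rev * RECLEN
        # Assume append is the common case: pad with empty
        # records if the target offset is past end-of-file.
        fh.seek(0, 2)
        pos = fh.tell()
        if pos < offset:
            fh.write((EMPTY + "\n") * ((offset - pos) // RECLEN))
        fh.seek(offset)
        fh.write(commit + "\n")

def revdb_get(path, rev):
    with open(path) as fh:
        fh.seek(rev * RECLEN)
        line = fh.readline().rstrip("\n")
    # Past-EOF reads and all-zero records both mean "no commit here".
    return None if (not line or line == EMPTY) else line
```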

sub copy_remote_ref {
	my $origin = $_cp_remote ? $_cp_remote : 'origin';
	my $ref = "refs/remotes/$GIT_SVN";
	if (safe_qx('git-ls-remote', $origin, $ref)) {
		sys(qw/git fetch/, $origin, "$ref:$ref");
	} elsif ($_cp_remote && !$_upgrade) {
		die "Unable to find remote reference: ",
		    "refs/remotes/$GIT_SVN on $origin\n";
	}
}

package SVN::Git::Editor;
use vars qw/@ISA/;
use strict;
use warnings;
use Carp qw/croak/;
use IO::File;

sub new {
	my $class = shift;
	my $git_svn = shift;
	my $self = SVN::Delta::Editor->new(@_);
	bless $self, $class;
	foreach (qw/svn_path c r ra/) {
		die "$_ required!\n" unless (defined $git_svn->{$_});
		$self->{$_} = $git_svn->{$_};
	}
	$self->{pool} = SVN::Pool->new;
	$self->{bat} = { '' => $self->open_root($self->{r}, $self->{pool}) };
	$self->{rm} = { };
	require Digest::MD5;
	return $self;
}

sub split_path {
	return ($_[0] =~ m#^(.*?)/?([^/]+)$#);
}

sub repo_path {
	(defined $_[1] && length $_[1]) ? "$_[0]->{svn_path}/$_[1]"
	                                : $_[0]->{svn_path}
}

sub url_path {
	my ($self, $path) = @_;
	$self->{ra}->{url} . '/' . $self->repo_path($path);
}

sub rmdirs {
	my ($self, $q) = @_;
	my $rm = $self->{rm};
	delete $rm->{''}; # we never delete the url we're tracking
	return unless %$rm;

	foreach (keys %$rm) {
		my @d = split m#/#, $_;
		my $c = shift @d;
		$rm->{$c} = 1;
		while (@d) {
			$c .= '/' . shift @d;
			$rm->{$c} = 1;
		}
	}
	delete $rm->{$self->{svn_path}};
	delete $rm->{''}; # we never delete the url we're tracking
	return unless %$rm;

	defined(my $pid = open my $fh, '-|') or croak $!;
	if (!$pid) {
		exec qw/git-ls-tree --name-only -r -z/, $self->{c} or croak $!;
	}
	local $/ = "\0";
	my @svn_path = split m#/#, $self->{svn_path};
	while (<$fh>) {
		chomp;
		my @dn = (@svn_path, (split m#/#, $_));
		while (pop @dn) {
			delete $rm->{join '/', @dn};
		}
		unless (%$rm) {
			close $fh;
			return;
		}
	}
	close $fh;

	my ($r, $p, $bat) = ($self->{r}, $self->{pool}, $self->{bat});
	foreach my $d (sort { $b =~ tr#/#/# <=> $a =~ tr#/#/# } keys %$rm) {
		$self->close_directory($bat->{$d}, $p);
		my ($dn) = ($d =~ m#^(.*?)/?(?:[^/]+)$#);
		print "\tD+\t/$d/\n" unless $q;
		$self->SUPER::delete_entry($d, $r, $bat->{$dn}, $p);
		delete $bat->{$d};
	}
}
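rmdirs works in two set passes: first it expands every recorded removal path into itself plus all of its ancestor directories, then it streams the surviving tree (via git-ls-tree) and strikes out any directory that still contains a file; whatever remains is safe to delete from SVN. That set-based pruning can be sketched in Python (this is an illustrative reduction of the idea, not the git-svn code, and it ignores the svn_path prefix handling):

```python
def prune_removable_dirs(removed_dirs, remaining_files):
    # Pass 1: expand each removed directory into itself + all ancestors.
    candidates = set()
    for path in removed_dirs:
        parts = path.split("/")
        for i in range(1, len(parts) + 1):
            candidates.add("/".join(parts[:i]))
    # Pass 2: any directory that still holds a surviving file is kept.
    for f in remaining_files:
        parts = f.split("/")
        while parts:
            parts.pop()
            candidates.discard("/".join(parts))
    return candidates  # directories with no surviving files
```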

sub open_or_add_dir {
	my ($self, $full_path, $baton) = @_;
	my $p = SVN::Pool->new;
	my $t = $self->{ra}->check_path($full_path, $self->{r}, $p);
	$p->clear;
	if ($t == $SVN::Node::none) {
		return $self->add_directory($full_path, $baton,
		                            undef, -1, $self->{pool});
	} elsif ($t == $SVN::Node::dir) {
		return $self->open_directory($full_path, $baton,
		                             $self->{r}, $self->{pool});
	}
	print STDERR "$full_path already exists in repository at ",
	             "r$self->{r} and it is not a directory (",
	             ($t == $SVN::Node::file ? 'file' : 'unknown'), "/$t)\n";
	exit 1;
}

sub ensure_path {
	my ($self, $path) = @_;
	my $bat = $self->{bat};
	$path = $self->repo_path($path);
	return $bat->{''} unless (length $path);
	my @p = split m#/+#, $path;
	my $c = shift @p;
	$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{''});
	while (@p) {
		my $c0 = $c;
		$c .= '/' . shift @p;
		$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{$c0});
	}
	return $bat->{$c};
}
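ensure_path caches one baton per directory prefix in $self->{bat}, so each directory along a path is opened (or added) at most once per commit, and each child is opened under its parent's baton. The incremental prefix walk looks like this in Python (a sketch: batons and open_or_add are stand-ins for the baton cache and open_or_add_dir):

```python
def ensure_path(path, batons, open_or_add):
    # batons maps directory prefix -> baton; '' is the root baton.
    if not path:
        return batons[""]
    parts = path.split("/")
    cur = parts[0]
    if cur not in batons:
        batons[cur] = open_or_add(cur, batons[""])
    for part in parts[1:]:
        prev = cur
        cur = cur + "/" + part
        # Open each deeper directory under its parent's baton,
        # reusing any baton already cached for this prefix.
        if cur not in batons:
            batons[cur] = open_or_add(cur, batons[prev])
    return batons[cur]
```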
|
|
|
|
|
|
|
|
sub A {
|
2006-06-28 10:39:14 +08:00
|
|
|
my ($self, $m, $q) = @_;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
|
|
|
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
|
|
|
|
undef, -1);
|
2006-06-28 10:39:14 +08:00
|
|
|
print "\tA\t$m->{file_b}\n" unless $q;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 06:23:48 +08:00
	$self->chg_file($fbat, $m);
	$self->close_file($fbat,undef,$self->{pool});
}

sub C {
	my ($self, $m, $q) = @_;
	my ($dir, $file) = split_path($m->{file_b});
	my $pbat = $self->ensure_path($dir);
	my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
				$self->url_path($m->{file_a}), $self->{r});
	print "\tC\t$m->{file_a} => $m->{file_b}\n" unless $q;
	$self->chg_file($fbat, $m);
	$self->close_file($fbat,undef,$self->{pool});
}

sub delete_entry {
	my ($self, $path, $pbat) = @_;
	my $rpath = $self->repo_path($path);
	my ($dir, $file) = split_path($rpath);
	$self->{rm}->{$dir} = 1;
	$self->SUPER::delete_entry($rpath, $self->{r}, $pbat, $self->{pool});
}

sub R {
	my ($self, $m, $q) = @_;
	my ($dir, $file) = split_path($m->{file_b});
	my $pbat = $self->ensure_path($dir);
	my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
				$self->url_path($m->{file_a}), $self->{r});
	print "\tR\t$m->{file_a} => $m->{file_b}\n" unless $q;
	$self->chg_file($fbat, $m);
	$self->close_file($fbat,undef,$self->{pool});

	($dir, $file) = split_path($m->{file_a});
	$pbat = $self->ensure_path($dir);
	$self->delete_entry($m->{file_a}, $pbat);
}

sub M {
	my ($self, $m, $q) = @_;
	my ($dir, $file) = split_path($m->{file_b});
	my $pbat = $self->ensure_path($dir);
	my $fbat = $self->open_file($self->repo_path($m->{file_b}),
				$pbat,$self->{r},$self->{pool});
	print "\t$m->{chg}\t$m->{file_b}\n" unless $q;
	$self->chg_file($fbat, $m);
	$self->close_file($fbat,undef,$self->{pool});
}

sub T { shift->M(@_) }

sub change_file_prop {
	my ($self, $fbat, $pname, $pval) = @_;
	$self->SUPER::change_file_prop($fbat, $pname, $pval, $self->{pool});
}

sub chg_file {
	my ($self, $fbat, $m) = @_;
	if ($m->{mode_b} =~ /755$/ && $m->{mode_a} !~ /755$/) {
		$self->change_file_prop($fbat,'svn:executable','*');
	} elsif ($m->{mode_b} !~ /755$/ && $m->{mode_a} =~ /755$/) {
		$self->change_file_prop($fbat,'svn:executable',undef);
	}
	my $fh = IO::File->new_tmpfile or croak $!;
	if ($m->{mode_b} =~ /^120/) {
		print $fh 'link ' or croak $!;
		$self->change_file_prop($fbat,'svn:special','*');
	} elsif ($m->{mode_a} =~ /^120/ && $m->{mode_b} !~ /^120/) {
		$self->change_file_prop($fbat,'svn:special',undef);
	}
	defined(my $pid = fork) or croak $!;
	if (!$pid) {
		open STDOUT, '>&', $fh or croak $!;
		exec qw/git-cat-file blob/, $m->{sha1_b} or croak $!;
	}
	waitpid $pid, 0;
	croak $? if $?;
	$fh->flush == 0 or croak $!;
	seek $fh, 0, 0 or croak $!;

	my $md5 = Digest::MD5->new;
	$md5->addfile($fh) or croak $!;
	seek $fh, 0, 0 or croak $!;

	my $exp = $md5->hexdigest;
	my $pool = SVN::Pool->new;
	my $atd = $self->apply_textdelta($fbat, undef, $pool);
	my $got = SVN::TxDelta::send_stream($fh, @$atd, $pool);
	die "Checksum mismatch\nexpected: $exp\ngot: $got\n" if ($got ne $exp);
	$pool->clear;

	close $fh or croak $!;
}

sub D {
	my ($self, $m, $q) = @_;
	my ($dir, $file) = split_path($m->{file_b});
	my $pbat = $self->ensure_path($dir);
	print "\tD\t$m->{file_b}\n" unless $q;
	$self->delete_entry($m->{file_b}, $pbat);
}

sub close_edit {
	my ($self) = @_;
	my ($p,$bat) = ($self->{pool}, $self->{bat});
	foreach (sort { $b =~ tr#/#/# <=> $a =~ tr#/#/# } keys %$bat) {
		$self->close_directory($bat->{$_}, $p);
	}
	$self->SUPER::close_edit($p);
	$p->clear;
}

sub abort_edit {
	my ($self) = @_;
	$self->SUPER::abort_edit($self->{pool});
	$self->{pool}->clear;
}

__END__

Data structures:

$svn_log hashref (as returned by svn_log_raw)
{
	fh => file handle of the log file,
	state => state of the log file parser (sep/msg/rev/msg_start...)
}

$log_msg hashref as returned by next_log_entry($svn_log)
{
	msg => 'whitespace-formatted log entry
', # trailing newline is preserved
	revision => '8', # integer
	date => '2004-02-24T17:01:44.108345Z', # commit date
	author => 'committer name'
};
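The commit date above is SVN's raw ISO-8601 form, always in UTC. A minimal sketch of converting it to a Unix epoch (the form a git committer date needs); the helper name is made up for illustration, and fractional seconds are simply dropped:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Time::Local qw/timegm/;

# Hypothetical helper: parse '2004-02-24T17:01:44.108345Z' (UTC) into a
# Unix epoch.  Returns undef on malformed input.
sub svn_date_to_epoch {
	my ($date) = @_;
	my ($Y, $m, $d, $H, $M, $S) = ($date =~
		/^(\d{4})-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)/) or return undef;
	return timegm($S, $M, $H, $d, $m - 1, $Y); # timegm months are 0-based
}

my $epoch = svn_date_to_epoch('2004-02-24T17:01:44.108345Z');
print "$epoch\n"; # 1077642104
```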

@mods = array of diff-index line hashes, each element represents one line
	of diff-index output

diff-index line ($m hash)
{
	mode_a => first column of diff-index output, no leading ':',
	mode_b => second column of diff-index output,
	sha1_b => sha1sum of the final blob,
	chg => change type [MCRADT],
	file_a => original file name of a file (iff chg is 'C' or 'R')
	file_b => new/current file name of a file (any chg)
}
;
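As a hedged illustration (not git-svn's actual parser), one line of `git diff-index -M` output can be turned into such an $m hash like this; the function name is hypothetical, and rename/copy scores (e.g. R100) are stripped:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical helper: parse one diff-index line of the form
#   :<mode_a> <mode_b> <sha1_a> <sha1_b> <status>\t<path>[\t<path>]
# into an $m-style hash as documented above.
sub parse_diff_index_line {
	my ($line) = @_;
	my ($info, @paths) = split /\t/, $line;
	my ($mode_a, $mode_b, $sha1_a, $sha1_b, $chg) = ($info =~
	    /^:(\d{6}) (\d{6}) ([a-f\d]{40}) ([a-f\d]{40}) ([MCRADT])\d*$/)
		or return undef;
	my %m = (mode_a => $mode_a, mode_b => $mode_b,
		 sha1_b => $sha1_b, chg => $chg);
	if ($chg eq 'C' || $chg eq 'R') {
		@m{qw/file_a file_b/} = @paths; # two paths: source, dest
	} else {
		$m{file_b} = $paths[0];
	}
	return \%m;
}

my $m = parse_diff_index_line(':100644 100755 ' . ('a' x 40) . ' ' .
				('b' x 40) . " M\tfoo/bar.c");
print "$m->{chg} $m->{file_b} mode_b=$m->{mode_b}\n";
```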
# retval of read_url_paths{,_all}();
$l_map = {
	# repository root url
	'https://svn.musicpd.org' => {
		# repository path		# GIT_SVN_ID
		'mpd/trunk' => 'trunk',
		'mpd/tags/0.11.5' => 'tags/0.11.5',
	},
}
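A quick sketch of walking this nested structure, using only the sample values shown above (the URL and paths are illustrative, not assumptions about any real repository):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Sample $l_map exactly as documented above.
my $l_map = {
	'https://svn.musicpd.org' => {
		'mpd/trunk' => 'trunk',
		'mpd/tags/0.11.5' => 'tags/0.11.5',
	},
};

# Walk repository roots, then paths; sorted for deterministic output.
foreach my $url (sort keys %$l_map) {
	foreach my $path (sort keys %{$l_map->{$url}}) {
		print "$url/$path => $l_map->{$url}->{$path}\n";
	}
}
```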
Notes:
	I don't trust the each() function unless I created %hash myself
	because the internal iterator may not have started at base.