Copying Files Between Linux Machines

Hi,

Having completed my Nextcloud installation (phase 1 of my new server build), I asked a friend what the quickest way to copy files from one Linux machine to another was. I could copy via my Windows machine, but Windows copying always seems slow to me; it would mean two copies, and it would deny me the opportunity to learn something else about Linux. He suggested scp, rsync (something like robocopy) and Caja (a GUI copier).

Knowing almost nothing, I started in on scp, asking first if I had to create shares for the two machines (or for one of them)… apparently not.

I figured out where Nextcloud stores files, renamed an image file to test.jpg and tried the following command to copy a file to my target machine’s current folder:

scp user@SOURCE:/mnt/nvme01//files/test.jpg .

I supplied the password and got the error:

scp: /mnt/nvme01//files/test.jpg: No such file or directory

Clearly, there was an issue reaching the source file which I knew existed and could be seen using ls on the source machine (ls not having any network capability that I’m aware of).

Following his suggestions, I created a text file in the root of SOURCE, ran chmod 666 on it, and copied it to TARGET (using scp) with no issues, but doing the same in the folder I wanted to copy from still failed. I wondered if it was somehow the directory structure or permissions, but then came up with a way to test it.

Knowing it worked from root, I repeated that process in /mnt (worked), then in /mnt/nvme01 (failed), and at that point realised it was somehow related to the way I'd mounted a partition at nvme01 on SOURCE.

Any idea how this can be resolved or is scp the wrong tool for this job?

James


You’re on the right track: the error isn’t with scp itself but with permissions or how the drive is mounted.

Since copying from /mnt worked but /mnt/nvme01 didn’t, your SSH user likely doesn’t have access to that mount point.

Try remounting the drive with user-friendly permissions and adjust directory permissions so the SSH user can traverse the paths.
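A minimal sketch of that fix, assuming the SSH user just needs to enter and read (the sudo line uses the thread's path; the rest just demonstrates the mode change on a throwaway directory):

```shell
# On SOURCE: 755 lets any local user enter and list the directory,
# while files inside keep their own modes.
#   sudo chmod 755 /mnt/nvme01
# The effect of the mode change, shown on a scratch directory:
d=$(mktemp -d)
chmod 750 "$d"
stat -c %a "$d"    # prints: 750
chmod 755 "$d"
stat -c %a "$d"    # prints: 755
```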

Better yet, use rsync: it's generally better than scp for larger transfers anyway, since it can resume interrupted transfers and sync only changed files.

Example:

rsync -av --progress user@SOURCE:/mnt/nvme01/files/ .

Hi Hayden,

I tried that…

rsync -av user@SOURCE:/mnt/nvme01/test.txt .

After supplying the password I got:

receiving incremental file list
rsync: [sender] change_dir "/mnt/nvme01" failed: Permission denied (13)

sent 8 bytes  received 8 bytes  2.13 bytes/sec
total size is 0  speedup is 0.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1865) [Receiver=3.2.7]
rsync: [Receiver] write error: Broken pipe (32)

Trying it again in just /mnt, it worked.

The drive is mounted by the following line in fstab:

LABEL=NXData /mnt/nvme01 ext4 nofail,noatime 0 1

Permissions applied during build were chmod 750 /mnt/nvme01 (I have a document originally based on a Raspberry Pi Nextcloud build)

James


750 permissions on /mnt/nvme01 only allow the owner and group in. If your SSH user is neither the owner nor in that group, it won't be able to traverse the directory.
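One way to check this, sketched here with the thread's path and a hypothetical user name in the comments:

```shell
# On SOURCE, compare the directory's owner/group with the SSH user's groups:
#   ls -ld /mnt/nvme01    # e.g. drwxr-x--- ... www-data www-data ...
#   id -nG james          # groups that user belongs to (name is a placeholder)
# With mode 750 (rwxr-x---), "other" users have no bits at all, so an SSH
# user who is neither the owner nor in the group is stopped right there.
id -nG    # groups of the user you are currently logged in as
```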

Doesn’t userid in rsync -av userid@SOURCE:/mnt/nvme01/test.txt . supply the correct credentials (makes mental note not to use chevrons as they appear to make text invisible)?

My tests suggest 666 permissions aren't much good either, yet the copy still works below the actual mount point (/mnt).

Assuming I can’t add the user of TARGET to SOURCE, what permissions should I apply?


Hi JamesCRocks,
I searched the Nextcloud wiki and found this juicy little tidbit: move Data from local to another machine

I took a look at the man page for rsync; it states at the top:

Name
    rsync – a fast, versatile, remote (and local) file-copying tool

Synopsis
    Local:
        rsync [OPTION…] SRC… [DEST]
    Access via remote shell:
        Pull: rsync [OPTION…] [USER@]HOST:SRC… [DEST]
        Push: rsync [OPTION…] SRC… [USER@]HOST:DEST
    Access via rsync daemon:
        Pull: rsync [OPTION…] [USER@]HOST::SRC… [DEST]
              rsync [OPTION…] rsync://[USER@]HOST[:PORT]/SRC… [DEST]
        Push: rsync [OPTION…] SRC… [USER@]HOST::DEST
              rsync [OPTION…] SRC… rsync://[USER@]HOST[:PORT]/DEST

    Usages with just one SRC arg and no DEST arg will list the source files instead of copying.

I hope that helps.


userid@SOURCE: just handles the SSH connection, but filesystem permissions are a separate layer. Your user still needs execute on each directory in the path and read on the files themselves. That’s why 666 on a file isn’t enough, and why it “works” under /mnt but fails at the mountpoint. Directories need execute to traverse.
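The "execute to traverse" point can be demonstrated locally without sudo; this is just a sketch in a scratch directory, nothing Nextcloud-specific:

```shell
# Run as a regular user. A directory with mode 666 can be listed (r)
# but not entered (no x), so files inside it become unreachable.
demo=$(mktemp -d)
mkdir "$demo/data"
echo hello > "$demo/data/file.txt"
chmod 666 "$demo/data"    # rw-rw-rw-: readable but not traversable
chmod -R a+rX "$demo"     # capital X adds execute to directories only
cat "$demo/data/file.txt" # reachable again; prints: hello
```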

I’ve used scp before, mostly “on the job”, but I’ve been retired for nearly eight years now. With my own systems I have much more freedom to use creative ways to copy stuff around. For example, if they are different machines of mine, I use snapshot ISO images on Flash Drives to copy around and also use them as backups. On the same system, I multi-boot, so it’s even faster and easier: I just mount the other partitions when I’m copying, but I don’t automount them. That way I copy or have immediate access to another distribution on the same computer only when I mount the partition to be used; unmount when the exchanges are complete.


Thanks for all the replies.

In the end, I cheated!

I gave up trying to copy from the source location and instead archived all the files from that directory into a zip file in my home folder on the source machine (there was just enough room on the MicroSD card for that). On the target, I scp'd that across with no issues (I'd already cd'd into the directory I wanted) and then extracted all the files.

So, job done! In my defence, I still used only Linux 🙂
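For anyone following along, the workaround looks roughly like this; the hosts and paths in the comments are placeholders, and tar stands in for zip in the runnable part since it ships everywhere:

```shell
# On SOURCE (may still need sudo to read the data directory):
#   tar -czf ~/data.tar.gz -C /mnt/nvme01 files
# On TARGET, in the directory you want the files in:
#   scp user@SOURCE:data.tar.gz . && tar -xzf data.tar.gz
# The same round trip, purely locally:
work=$(mktemp -d)
mkdir "$work/files"
echo hi > "$work/files/test.jpg"
tar -czf "$work/data.tar.gz" -C "$work" files   # archive the folder
mkdir "$work/out"
tar -xzf "$work/data.tar.gz" -C "$work/out"     # extract elsewhere
cat "$work/out/files/test.jpg"                  # prints: hi
```

The nice property of this route is that only one file needs to cross the permission boundary, so a single readable archive sidesteps per-file traversal problems.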


@JamesCRocks to me that’s a good solution; it’s very similar to what I was describing, and it also shows the choice and flexibility of the Linux infrastructure - more than one way to solve quite a few problems!
