Problems with a shell script

Status
Not open for further replies.

grandad

Member
I recently wrote a wee shell script to run on my [Linux Mint] laptop.

The purpose of the script is to cycle through a series of servers and FTP down some files. This is done by setting up a few arrays of variables and then cycling through them -

Code:
count=0
# Loop over every server in the SERVER array
while [[ $count -lt ${#SERVER[@]} ]]
do
    # Drive a non-interactive FTP session with a here-document
    ftp -v -i -n "${SERVER[$count]}" <<END_OF_SESSION
user ${USER[$count]} ${PASSW[$count]}
cd webspace/httpdocs/backups
lcd ${LOCAL[$count]}
mget $REMOTEFILE
mdelete $OLDFILE
bye
END_OF_SESSION
    (( count++ ))
done
This has worked perfectly, but today I ran into a problem. On the first of the month, some rather large files are involved. It cycled through the first few all right [files of less than 50 MB] but baulked at the next - a file of 385 MB. I get my "150 Opening BINARY mode data connection" all right, and it completes the download but then just hangs. A Ctrl + C will cause it to skip on to the next cycle, but it repeatedly hangs on the larger [in excess of 200 MB] files.

Has anyone any ideas?

I should add that I am a complete novice when it comes to shell scripting, and the above is my first attempt!
 

dave

New Member
A quick search led me to linux - How to prevent TCP connection timeout when FTP'ing large file? - Server Fault

It suggests that ftp may be timing out on large files due to network conditions, and the suggested fix is to switch to curl for FTP. There's an example of setting it up here: using curl to access ftp server » Linux by Examples
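
Something along these lines should pull a single file down over FTP with curl [just a sketch - the host, login, paths and filename below are made-up examples]:

Code:
# Untested sketch: replace the host, credentials, paths and filename with your own
curl --user "myuser:mypassword" \
     --output /home/me/backups/sitename-backup.tar.gz \
     "ftp://ftp.example.com/webspace/httpdocs/backups/sitename-backup.tar.gz"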


 

grandad

Member
Sincere apologies for not replying earlier, and thanks for the replies. [I was relying on an auto-response email which I never got!]

I had a look at both curl and wget but ran into problems with both. I can't use wget, as the files I am trying to access are above the root of the server. Curl, on the other hand, works perfectly but only for single files.

My problem is that I need to access several servers. Each server will contain at least one file [depending on the various cron cycles], and the files take the format Sitename-Filetype-CurrentDate with an extension that depends on the file content [either tar.gz or sql]. In other words, using *CurrentDate.* works for all servers and all filetypes. I tried setting up a curl script but, as curl doesn't accept wildcards, the resulting script was worse than a mess.
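
Per server, the curl version ends up something like this [a rough sketch only - the host, login, local path and date format are made-up examples], and multiplied across several servers it got unwieldy fast:

Code:
# Rough sketch: list the remote directory, pick out today's files, fetch them one by one
# Host, credentials, paths and the date format are made-up examples
TODAY=$(date +%Y-%m-%d)
FILES=$(curl -s --list-only --user "myuser:mypassword" \
        "ftp://ftp.example.com/webspace/httpdocs/backups/")
for f in $(echo "$FILES" | grep "$TODAY")
do
    curl --user "myuser:mypassword" \
         --output "/home/me/backups/$f" \
         "ftp://ftp.example.com/webspace/httpdocs/backups/$f"
done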

What puzzles me is that FTP does work [and accepts wildcards!] and even the largest files are successfully downloaded [I tried up to 1 GB in size], so timeout is not the problem. The problem is that the script runs perfectly with files smaller than around 200 MB, and cycles nicely on to the next file after each download, but if the file is greater than 200 MB the script just hangs and the whole process stops. The downloaded file is complete and intact but the script refuses to move on. A Ctrl + C will set it back working again, but that sort of defeats the purpose of the exercise!
 

cssbutton

New Member
I am not an expert in shell, but what I want to suggest is that you make the script skip the large files and update a file on your computer with the location of each file it skipped. That way you can download the larger files manually.
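
Something like this maybe [only a rough idea, not tested - the login and filename are examples, and it assumes the exact remote filename is known]:

Code:
# Rough idea, not tested: ask the server for the file size first and skip/log anything too big
# Login and filename are examples; assumes the exact remote filename is known
LIMIT=209715200   # roughly 200 MB
SIZE=$(curl -sI --user "myuser:mypassword" \
       "ftp://ftp.example.com/webspace/httpdocs/backups/sitename-sql-2012-06-01.sql" \
       | tr -d '\r' | awk '/Content-Length/ {print $2}')
if [ -n "$SIZE" ] && [ "$SIZE" -gt "$LIMIT" ]
then
    # Too big - note it down and fetch it by hand later
    echo "sitename-sql-2012-06-01.sql" >> "$HOME/skipped-files.txt"
else
    echo "small enough - let the normal ftp loop fetch it"
fi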
 

grandad

Member
I did play with that idea, and also with the idea of splitting files on the server and reassembling them locally. Unfortunately they both went against my philosophy of "fire and forget"!

I have actually solved the problem by discovering lftp, which does the job perfectly. However, I'm still at a loss as to why plain ftp doesn't work.
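
In case it helps anyone later, the replacement is more or less the same here-document with lftp in place of ftp [a sketch only - the server, login and filenames below are made-up examples; lftp happily takes the mget wildcard]:

Code:
# Sketch of the lftp equivalent - server, credentials and filenames are examples only
lftp -u myuser,mypassword ftp.example.com <<END_OF_SESSION
cd webspace/httpdocs/backups
lcd /home/me/backups
mget *2012-06-01.*
mrm *2012-05-01.*
bye
END_OF_SESSION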
 