The power of scripting and automation
One of the most powerful features of all *nix systems is the ability to daisy-chain small, modular commands to achieve our goals, and one of the most powerful tools in that arsenal is cURL.
For lack of better terminology, cURL is basically a command-line tool for transferring data to and from servers, and it slots neatly into scripts alongside other commands.
As a quick example, your client comes to you and says, "I need to periodically download all files from our banking system's sFTP server once a day and import them automatically into our ERP."
To begin thinking like a system administrator solving your client's task, break the request down into the basic steps of the process:
- We need to connect to the sFTP server.
- We want to choose our destination folder.
- We want to keep the names of the destination files the same as the source files.
- Cycle through the list of files and download them.
Some assumptions about this connection to the bank: they will not provide just a simple username and password like most basic sites. Instead, they'll use a private/public key pair for the authentication component, along with a username that, once a connection is established, takes us to a folder accessible only to our client.
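The steps above can be sketched in a few lines of shell. The host name, username, key path, and destination folder below are all placeholder assumptions, so substitute your client's real values; the fetch command is echoed rather than executed so the logic can be inspected without a live server.

```shell
#!/bin/sh
# Placeholder connection details -- assumptions, not real values.
HOST="sftp://sftp.examplebank.com/outgoing/"
USER="ourclient"
KEY="$HOME/.ssh/bank_key"
DEST="/var/erp/inbox"

# Build the curl command that fetches one file, keeping the source file
# name at the destination (-o "$DEST/$1").
build_fetch_cmd() {
    printf 'curl -s --key %s -u %s: -o %s/%s %s%s\n' \
        "$KEY" "$USER" "$DEST" "$1" "$HOST" "$1"
}

# In the real script we'd first list the remote files:
#   curl -s --list-only --key "$KEY" -u "$USER:" "$HOST"
# then loop over that list, running the command built above per file.
build_fetch_cmd "statement.csv"
```

The `--key` option points curl at the private key, and `-u "$USER:"` supplies the username with an empty password, matching the key-based setup described above.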
So what are each of the tools we're going to use?
Shell scripting
Shell scripting is our method of running our cURL commands procedurally, that is, in sequence from top to bottom. For our use case, we don't really need to create classes and functions; a simple top-to-bottom script that runs our list of commands one by one will do, preferably with one addition:
- Some error trapping (i.e. we want to know if the bank's server is running and whether the connection fails, right? We also want to know if any files we tried to download didn't arrive.)
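That error trapping can lean on curl itself: curl exits with a nonzero status when a transfer fails, so the script can check each download. In this sketch, `fetch_file` is a stand-in that fakes one failure so the trap can be demonstrated without a live server; in the real script it would run the curl download command.

```shell
#!/bin/sh
# Stand-in for the real curl call; fakes a failure for one file.
fetch_file() {
    case "$1" in
        missing.csv) return 78 ;;  # 78 is curl's "remote file not found"
        *) return 0 ;;
    esac
}

failures=""
for f in statement.csv missing.csv; do
    if fetch_file "$f"; then
        echo "downloaded: $f"
    else
        echo "download FAILED: $f (curl exit $?)"
        failures="$failures $f"
    fi
done

# A real script might mail or log this summary for the administrator.
[ -n "$failures" ] && echo "failed files:$failures"
```

Checking `$?` per file also distinguishes "couldn't connect at all" from "one file didn't come down," which answers both of the questions raised in the bullet above.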
cURL
cURL is designed to transfer data over many different protocols, including HTTP, FTP, SFTP, and SCP, and we can drive it from our script to do the heavy lifting of connecting to the bank and downloading the files.
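As a tiny illustration of that protocol flexibility, curl even speaks `file://`, which needs no network at all and so makes a safe local demonstration (the temp file path here is just an example):

```shell
#!/bin/sh
# Write a small local file, then fetch it back through curl's file:// scheme.
printf 'hello from curl\n' > /tmp/curl-protocol-demo.txt
curl -s "file:///tmp/curl-protocol-demo.txt"
```

Swapping the URL scheme (to `sftp://`, `ftp://`, and so on) is largely all it takes to point the same tool at a different kind of server.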
Discussion