I am looking for a best practice, a solution, or simply a recommendation on how Jenkins automation is usually organized on small and medium-sized projects.
I have a small project that consists of two applications (a client and a server). I rent a server where I host these applications. I started looking at how I could automate the process of deploying changes to the server, so I started reading about CI/CD and the tools used to facilitate it. I have no tests to run. Mostly, I want to get rid of the following steps whenever I make changes to one of the applications:
- connect to the server through ssh;
- cd into the application’s folder;
- stop the running process;
- run git fetch and git merge;
- rebuild the application;
- run a new process.
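The steps above can be sketched as a single script. Everything project-specific here is an assumption: the directory path, the systemd unit name my-server-app, the branch origin/main, and the npm build command are placeholders to replace with your own values.

```shell
#!/usr/bin/env bash
# Sketch of the manual deployment steps as one script.
# Assumptions: the app lives in APP_DIR, runs as a systemd unit
# called my-server-app, tracks origin/main, and builds with npm.
set -euo pipefail

APP_DIR="${APP_DIR:-/var/www/my-server-app}"   # hypothetical path

deploy() {
  cd "$APP_DIR"
  sudo systemctl stop my-server-app            # stop the running process
  git fetch origin
  git merge --ff-only origin/main              # fail loudly if history diverged
  npm run build                                # replace with your build command
  sudo systemctl start my-server-app           # run the new process
}

# Guarded so sourcing the file does not deploy; invoke with DO_DEPLOY=1.
if [ "${DO_DEPLOY:-0}" = "1" ]; then
  deploy
fi
```

Using --ff-only instead of a plain merge makes the script abort if the server's checkout has drifted from the remote, rather than silently creating a merge commit on the server.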
The question arose after I noticed that most of the articles I read about CI/CD and Jenkins demonstrated automation using Jenkins Pipelines. Most of those examples had a pipeline with at least three stages: build, test, and deploy. The deploy stage either built a Docker image and pushed it to a registry or executed a bash script with an abstract name such as publish. The word deploy as a stage name, and especially the examples that publish a Docker image to a registry, make me think that there should be at least two servers: one hosts Jenkins and its pipelines, and the other hosts the applications.
In my case, I am considering running a simple bash script from a pipeline or a freestyle job. Either way, I want Jenkins to do this on the same server that hosts my applications. Is that considered common practice?
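For the pipeline variant, a minimal declarative Jenkinsfile can collapse the three-stage examples into a single stage that runs a script locally. This is only a sketch: deploy.sh is a hypothetical script checked into the repository, and agent any assumes a single-node Jenkins installed on the same server as the applications, so no remote publish step is needed.

```groovy
pipeline {
    // On a single-node Jenkins, "any" is the app server itself,
    // so the deploy step runs locally -- no registry or ssh required.
    agent any
    stages {
        stage('Deploy') {
            steps {
                sh './deploy.sh'   // hypothetical script in the repo root
            }
        }
    }
}
```

A freestyle job achieves the same thing with a single "Execute shell" build step; the pipeline form just keeps the job definition in version control alongside the code.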
Source: Docker Questions