
Update 1.30: New Host Update
NOV 30 2025
Hello all,
In this update we move GunColony to a new hosting location, resulting in reduced ping and thus improved game responsiveness for almost everyone around the world! As part of the migration, we have also doubled the server’s RAM for extra stability, improved the auto-restart to greatly reduce wait times, and significantly improved the software we use to update GunColony going forward, letting us more easily ship frequent server updates.
We have decided to not bundle any other content with the host switch to help us isolate any new issues. A new map and playlist update will be coming soon!
NEW HOST MIGRATION
We have worked around the clock during the Thanksgiving holiday to move GunColony to a new host machine.

It is run by the same hosting company as before, but is now located on the US East Coast rather than the Canadian location we have used for the last ~8 years. We made this move to reduce players’ ping to the server, a very important gameplay factor: we observed that the new location reduces ping for almost everyone compared to the previous one, possibly due to global internet cable layouts. Players in most regions around the world can expect around 5-20 ms lower ping than before, a significant gain comparable to the latency reduction of raising a server’s tick rate from 60 Hz to 120 Hz. In a fast-paced FPS like GunColony, every bit of latency matters, and we believe the reduction can be felt fairly easily by a skilled player.
The only players who probably won’t see a latency reduction are those who live very close to the previous server location (Montreal), but they will still have excellent ping after this update. Even New England (in the northeastern US), which is physically closer to the previous location than to the new one, will see ~10 ms lower latency after this update.
INCREASED MEMORY
We are now renting a new server machine with the same CPU as before (a Ryzen 5 5600X) but twice the RAM, at a lower monthly cost than our previous server. (More RAM??? In this economy??)
The main impact of this change is that we are now allocating additional RAM to all of our servers, reducing the likelihood of servers encountering memory leaks and thus improving server stability. We now have a total of 128GB of RAM to distribute between our servers.
NEW AUTO-RESTART SCHEDULE
In this update, we have improved the daily auto-restart functionality of the server, greatly reducing your wait time before starting another match.
- The auto-restart now occurs at 08:00 UTC instead of 03:00 EST/EDT.
- This means that the restart now happens one hour later during the summer, but at the same time as before during winter.
- Players outside the US no longer need to deal with confusing restart times in the spring and autumn due to differences between US and global daylight savings dates.
- The minigame servers now restart in a staggered fashion instead of all at once.
- This prevents multiple simultaneous restarts from overloading the server’s CPU, making the restart of each individual game server much faster.
- Based on testing, this reduces restart times from around 95 seconds previously to less than 30 seconds in the new system. This means you wait about a third as long before you can jump back into the gamemode you were playing before the restart!
- This also spreads out the network burden on our new CI/CD package servers (more on that below).
- The full auto-restart schedule is now (in UTC):
- 08:00:00 - Lobby restart
- 08:00:30 - Testserver restart
- 08:01:00 - StaffBuild restart
- 08:01:10 - Login restart
- 08:01:30 - PVP restart
- 08:01:45 - Competitive restart
- 08:02:00 - Endless restart
- 08:02:15 - Mobarena restart
- 08:02:30 - Mobarena2 restart
- 08:02:45 - Mobarena3 restart
- The new auto-restart and message announcement schedules are designed not only to minimize the time taken by the whole restart process given the server’s CPU limits, but also to ensure that by the time players on a game server see the announcement that their server is restarting, the lobby server has already finished restarting, so players can immediately return to the lobby without problems.
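As a minimal sketch of how such a staggered schedule can be represented, the table above can be encoded as offsets from the 08:00 UTC base time. The names and structure here are hypothetical, not the server's actual code:

```python
from datetime import time

# Staggered restart offsets in seconds after 08:00 UTC, taken from the
# schedule listed above. (Hypothetical representation, not the real config.)
RESTART_BASE = time(8, 0, 0)
RESTART_OFFSETS = {
    "Lobby": 0,
    "Testserver": 30,
    "StaffBuild": 60,
    "Login": 70,
    "PVP": 90,
    "Competitive": 105,
    "Endless": 120,
    "Mobarena": 135,
    "Mobarena2": 150,
    "Mobarena3": 165,
}

def restart_time(server: str) -> time:
    """Return the scheduled UTC restart time for a server."""
    total = RESTART_BASE.hour * 3600 + RESTART_BASE.minute * 60 + RESTART_OFFSETS[server]
    return time(total // 3600, (total % 3600) // 60, total % 60)
```

Because the lobby's offset is 0 and every game server restarts at least 90 seconds later, the lobby is always back up before players are sent there.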
MODERN SERVER DEPLOYMENT
(Note: the below sections are very technical)
In this update, we have changed how GunColony servers are deployed, making it more modern and easier for us to manage the network.
In the previous deployment, we used software called SubServers 2 to manage our minigame servers directly from the BungeeCord proxy. This gave us a few benefits, such as the ability to easily start, stop, and restart the individual servers and to interact with their server consoles from one location. Unfortunately, this software has been unmaintained for a few years (R.I.P. ME1312), forcing us to add custom patches just to keep it working with newer Minecraft versions.
In our new deployment, we have moved away from SubServers 2 and instead host all servers directly in our server admin panel. We developed new systems to avoid the extra manual update burden that might have come with such a change, building custom tools that let us restart and send commands to all servers from a central location just like before. This now covers not only the minigame servers but also our lobby, staffbuild, login, and test servers, making it even easier for us to deploy new server updates and carry out other administrative tasks. Another welcome benefit is that our proxy server can now be restarted without also restarting all of the game servers, as the SubServers 2 system required, which may reduce effective downtime during some updates.
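The central command broadcast described above can be sketched roughly as follows. The `send` callback stands in for whatever transport the admin panel actually exposes (console API, RCON, etc.), which is not shown here; the server names and function are illustrative only:

```python
from typing import Callable

# Hypothetical list of all deployed servers; in the real system this comes
# from the standardized server-list format described in the next section.
ALL_SERVERS = ["Lobby", "Testserver", "StaffBuild", "Login",
               "PVP", "Competitive", "Endless",
               "Mobarena", "Mobarena2", "Mobarena3"]

def broadcast(command: str, send: Callable[[str, str], None],
              servers: list = ALL_SERVERS) -> int:
    """Send `command` to each server's console; return how many were reached."""
    reached = 0
    for server in servers:
        send(server, command)  # transport-specific delivery, abstracted away
        reached += 1
    return reached
```

Keeping the transport behind a callback means the same loop works whether the command goes through the panel, a socket, or a test stub.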
NEW END-TO-END CI/CD PIPELINE
CI/CD (continuous integration and continuous deployment) is a software development practice that, roughly speaking, automatically takes code from the repository where it is edited (such as GitHub) all the way to the software running live (GunColony). In this update, we finally implemented a custom professional-grade CI/CD pipeline with many elements specific to our needs as a Minecraft server. This pipeline is faster, more convenient, and more extensible, and it minimizes the time and repetitive work needed to release frequent server updates in the future.
- We developed a new standardized format for defining the list of deployed servers, shared across our CI/CD system to provide a server list for direct upload tasks. The server list is currently hard-coded but can easily be made dynamic to accommodate future auto-scaling setups.
- We implemented new Server Wrapper software that runs right before each startup of each server and automatically downloads the latest plugins and configs for that server type from the CI/CD packages system whenever new packages have been uploaded.
- Whereas previously we had to upload plugins to each server individually, our servers can now download plugins from a centralized package registry. This reduces what was previously a multi-step upload-and-sync process to a single-step deployment that automatically updates all plugins on the next restart.
- This also fixes a previous problem where uploading plugins (particularly Place’s Bukkit plugin mode) while the server was running could cause errors in the running server due to classloading issues. The issue even affected server shutdown, causing the server to deadlock during shutdown because it failed to load Kotlin Coroutines classes. This meant we either had to shut down the servers before the upload and accept higher downtime, or force-kill the servers after uploading the plugins. Our new system fixes the issue because we no longer need to upload plugins to running servers.
- We implemented a new configuration uploading system to update config files to running servers, since GunColony’s custom plugins allow hot-reloading new configs to apply an update without a restart if there have been no code changes.
- Our previous system consisted of local IDE run configurations that deployed configs directly from our development machines, while the new system is flexible and can run either locally or in the CI/CD pipeline, with both development and production deployment options.
- The previous system ran all reload commands for all plugins regardless of whether their configs had actually been updated. In contrast, the new system detects which configs have changed and only runs reload commands on plugins with updated configs. This prevents the long freezes that used to occur whenever we applied an update to the server.
- The new system is much faster, taking only a few seconds to run locally, roughly 3-5x faster than the previous system. Combined with the new selective reload command execution, this cuts development turnaround times when testing on the test server from ~20s down to ~5s. The speed gains come from thorough parallelization and from uploading configs as a single ZIP file, whereas the previous system uploaded them one by one.
- The new system also packages the configs so that they can be downloaded by the Server Wrapper at the next server restart. Multiple packages are created based on the set of config files needed by each type of server. There is a separate deployment mode to only package the configs but not upload them to running servers, in order to wait until the next server restart to apply the new configs alongside any plugin updates.
- We implemented automatic CI/CD workflows that update plugins and configs on the development and production servers when we push to the corresponding Git branches.
- The CI/CD workflows are based on Forgejo, an open-source self-hosted Git host, rather than GitHub, allowing us to maintain full control over our development setup. All our CI/CD jobs run on our own servers.
- The new system was heavily planned and built with the aid of AI tools: since this is an internal pipeline, micro-level code quality was less crucial, and we prioritized the quality, extensibility, and security of the high-level design instead.
- Thanks to improved tools, we were able to build the entire system in around 10 days while simultaneously working on the server migration.
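The Server Wrapper's pre-startup update check can be sketched as a simple version comparison against the registry's manifest, plus a checksum check on the downloaded package. This is an illustrative sketch under assumed names (`needs_update`, `verify_checksum`), not the wrapper's actual code; the registry fetch and download calls are abstracted away:

```python
import hashlib

def needs_update(installed: dict, remote: dict) -> bool:
    """True if the registry advertises a newer package than what is installed.

    `installed` is the locally recorded package metadata (None on a fresh
    server); `remote` is the manifest fetched from the package registry.
    """
    if installed is None:
        return True  # fresh server: always download
    return installed.get("version") != remote["version"]

def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Verify a downloaded package ZIP against the manifest's checksum."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Skipping the download when versions match keeps restarts fast, since most restarts happen without any new packages having been published.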
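The selective reload described above amounts to hashing each config file, comparing against the hashes recorded at the previous deployment, and reloading only the plugins that own a changed file. A minimal sketch, with hypothetical names and with file contents passed in as a dict rather than read from disk:

```python
import hashlib

def changed_plugins(old_hashes: dict, new_files: dict, owner: dict) -> set:
    """Return the plugins whose config files changed since the last deploy.

    old_hashes: path -> sha256 hex digest from the previous deployment
    new_files:  path -> current file contents (bytes)
    owner:      path -> name of the plugin that owns that config file
    """
    plugins = set()
    for path, data in new_files.items():
        digest = hashlib.sha256(data).hexdigest()
        if old_hashes.get(path) != digest:  # new file or changed contents
            plugins.add(owner[path])
    return plugins
```

Only the plugins in the returned set get a reload command, which is what avoids the long freezes from reloading everything on every deploy.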