Compare commits

...
This repository has been archived on 2025-04-03. You can view files and clone it, but cannot push or open issues or pull requests.

15 Commits

Author SHA1 Message Date
ea25753c09 Update README to reflect repository migration to GitHub 2025-04-03 21:57:50 +02:00
4d08911bdc Add Proxmox Container Update Script and update README 2025-04-03 21:22:03 +02:00
5b41b49ed4 Fixes and improvements for runai script 2024-09-04 12:13:39 +02:00
64b72b49d7 Removed OPENAI_API_KEY from script to prevent override by environment variable 2024-08-26 10:28:44 +02:00
70f136eaca Added openai bash integration script 2024-08-23 10:12:09 +02:00
454c45c51b Update gitea/auto_mapper/auto_mapper.py
Fixed wrong default values
2024-08-13 10:29:07 +02:00
1e3cf595bc Updated README and added DEBUG argument to auto_mapper 2024-08-06 21:40:50 +02:00
24d6c3008f Finished Dockerfile 2024-08-06 21:30:36 +02:00
9a0bbd79ae Added Dockerfile for easier script use 2024-08-06 21:21:41 +02:00
ff49e11010 Fixed copy error when a team does not exist in target organization 2024-08-06 21:21:20 +02:00
6aa5172634 Added new script: gitlab teams auto mapper 2024-08-01 18:02:50 +02:00
bbf604374a Merge branch 'main' of 192.168.8.40:ZionNetworks/linux-bash-scripts 2024-07-28 07:40:01 +02:00
034a819259 Improved error handling; added stats; implemented remaining overrride flags 2024-07-28 07:39:29 +02:00
e227bff1e6 Updated README 2024-07-19 10:31:42 +02:00
d81e5700ef Grammar fix in README 2024-07-19 10:28:25 +02:00
12 changed files with 1564 additions and 56 deletions

4
.gitignore vendored
View File

@ -1,4 +1,6 @@
.env
**/*.env
*.log
gitlab2gitea/gitea_projects.json
gitlab2gitea/gitlab_projects.json
.venv*
.vscode

View File

@ -1,3 +1,7 @@
> [!IMPORTANT]
> This repository has moved to GitHub and was archived. You can find the latest version at [https://github.com/zion-networks/linux-bash-scripts](https://github.com/zion-networks/linux-bash-scripts).
# Linux Scripts Collection by Zion Networks
### Who is Zion Networks?
@ -10,8 +14,11 @@ We're planning to release a growing amount of open source software, that is free
### What scripts can be found here?
| Name | Description | License | Current Version | Written in | Supported Distros | File |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | --------------- | ---------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| Borgmatic Setup Tool | If you plan to use borg as backup solution, you should also take a look at [borgmatic](https://torsion.org/borgmatic/). It's a Python wrapper for the award winning backup tool [borgbackup](https://borgbackup.readthedocs.io/en/stable/index.html) that simplifies creating secure and reliable backups even more. You can even store your configurations in files. This script will do the setup for you to. | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.2.1 | Bash | Debian and derivates | [bormatic_setup.sh](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/borgmatic/borgmatic_setup.sh) |
| Git Rewrite Author | **USE WITH CAUTION!!!**<br><br> This script will rewrite the entire history of the remote end and set the author email to the provided one.<br><br>**This is NOT reversable!** | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Bash | Most Linux distros | [git_rewrite_author.sh](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/git/git_rewrite_author.sh) |
| UFW Beautifier | Simple Python script to get a fancy formatted `ufw.log` | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Python | Most Linux distros | [ufw_beautifier.py](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/ufw/ufw_beautifier.py) |
| Name | Description | License | Current Version | Written in | Supported Distros | Path |
| ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | --------------- | ---------- | -------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| Proxmox VE Container Update Helper | Simplifies updating your Proxmox VE containers by performing updates or upgrades on either all or selected containers. Specific container IDs can be included or excluded. | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Bash | Proxmox VE Debian | [Proxmox VE PCT Update Helper](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/proxmox/) |
| Borgmatic Setup Tool | If you plan to use borg as a backup solution, you should also take a look at [borgmatic](https://torsion.org/borgmatic/). It's a Python wrapper for the award-winning backup tool [borgbackup](https://borgbackup.readthedocs.io/en/stable/index.html) that simplifies creating secure and reliable backups even more. You can even store your configurations in files. This script will do the setup for you, too. | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.2.1 | Bash | Debian and derivatives | [Borgmatic Setup](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/borgmatic/) |
| Git Rewrite Author | **USE WITH CAUTION!!!**<br><br> This script will rewrite the entire history of the remote end and set the author email to the provided one.<br><br>**This is NOT reversible!** | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Bash | Most Linux distros | [Git Scripts](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/git/) |
| UFW Beautifier | Simple Python script to get a fancy formatted `ufw.log` | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Python | Most Linux distros | [UFW Beautifier](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/ufw/) |
| Gitlab2Gitea Migration | Python script to perform a full migration from Gitlab to Gitea | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0-pre | Python | Most Linux distros | [Gitlab to Gitea](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/gitlab2gitea/) |
| Gitea Teams Auto Mapper | Python script to automatically copy teams from a source organization to other organizations | [MIT]([LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE)) | v1.0.0 | Python | Most Linux distros | [Gitea Teams Auto Mapper](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/gitea/auto_mapper/) |

View File

@ -1,5 +1,5 @@
# Tools
| Name | Description | Usage | File |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------- | ---- |
| git_rewrite_author | **USE WITH CAUTION!!!**<br><br> This script will rewrite the entire history of the remote end and set the author email to the provided one.<br><br>**This is NOT reversable!** | `./git_rewrite_author.sh "/path/to/repo" "new.mail@address.com"` |
| Name | Description | Usage | File |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------- | ---- |
| git_rewrite_author | **USE WITH CAUTION!!!**<br><br> This script will rewrite the entire history of the remote end and set the author email to the provided one.<br><br>**This is NOT reversible!** | `./git_rewrite_author.sh "/path/to/repo" "new.mail@address.com"` |

View File

@ -0,0 +1,16 @@
# Use a Python base image
FROM python:3.9 AS zn-gitea-auto_mapper
# Set the working directory
WORKDIR /app
# Copy the Python file, .env file, and requirements.txt to the working directory
COPY auto_mapper.py .
COPY .env .
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Run the Python file
CMD ["python", "auto_mapper.py"]

View File

@ -0,0 +1,71 @@
# Gitea Teams auto mapping
## Description
This script copies or updates all teams (excluding `Owners`) from a specified source organization to all other organizations (except excluded ones). It can be used to achieve a more controllable permission hierarchy within Gitea, similar to the sub-groups available in GitLab.
## Author(s)
- [Enrico Ludwig](https://git.zion-networks.de/eludwig) <[enrico.ludwig@zion-networks.de](mailto:enrico.ludwig@zion-networks.de?subject=Gitlab%20to%20Gitea%20migration%20script)>
## License
MIT License. For more details, refer to the [LICENSE](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE) file.
## Usage
```sh
pip install -r requirements.txt
python3 auto_mapper.py [options]
```
## Available script arguments
| Argument | Type | Description | Valid values |
| --------------- | --------------- | ----------------------------------------------- | ----------------------------------------------------------------- |
| `--host` | Key-Value | Specify the Gitea instance host | An IP address or hostname |
| `--port` | Key-Value | Specify the Gitea instance port | A valid port from 1 to 65535 |
| `--token` | Key-Value | Specify the Gitea instance token | A valid Gitea user token string |
| `--ssl` | Switch | Specify if the Gitea instance uses SSL | Enable SSL support (https) |
| `--debug` | Switch | Enable debug logging | Enable debug output |
| `--source-orga` | Key-Value | Specify the source organization | The name of the source organization |
| `--dry-run` | Switch | Enable dry-run mode, no changes will be made | Enable dry-run mode to prevent changes |
| `--exclude` | Multi Key-Value | Specify organizations to exclude | Can be used multiple times to exclude specific organizations |
| `--update` | Switch | Updates existing teams in target organizations | Enable to update already existing teams at target organizations |
| `--override` | Switch | Override existing teams in target organizations | Enable to override already existing teams at target organizations |
## Using environment variables
Note: Instead of exporting environment variables, you can also use a `.env` file containing the respective variables (see the example below the table).
| Variable | Type | Description | Valid values |
| ---------------- | --------------- | ----------------------------------------------- | ----------------------------------------------------------------- |
| `GITEA_INSTANCE` | Key-Value | Specify the Gitea instance host | An IP address or hostname |
| `GITEA_PORT` | Key-Value | Specify the Gitea instance port | A valid port from 1 to 65535 |
| `GITEA_TOKEN` | Key-Value | Specify the Gitea instance token | A valid Gitea user token string |
| `GITEA_SSL` | Switch | Specify if the Gitea instance uses SSL | Enable SSL support (https) |
| `DEBUG` | Switch | Enable debug logging | Enable debug output |
| `SOURCE_ORGA` | Key-Value | Specify the source organization | The name of the source organization |
| `DRY_RUN` | Switch | Enable dry-run mode, no changes will be made | Enable dry-run mode to prevent changes |
| `EXCLUDE_ORGAS` | Multi Key-Value | Specify organizations to exclude | Can be used multiple times to exclude specific organizations |
| `UPDATE_TEAMS` | Switch | Updates existing teams in target organizations | Enable to update already existing teams at target organizations |
| `OVERRIDE_TEAMS` | Switch | Override existing teams in target organizations | Enable to override already existing teams at target organizations |
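For reference, a minimal `.env` file might look like the sketch below; all values are placeholders, `EXCLUDE_ORGAS` takes a comma-separated list, and switches accept `true`/`1`/`yes`/`y`. Command-line arguments take precedence over values loaded from the environment.
```sh
# Example .env (placeholder values)
GITEA_INSTANCE=127.0.0.1
GITEA_PORT=3000
GITEA_TOKEN=your-secret-user-token
GITEA_SSL=false
SOURCE_ORGA=MyTemplateOrga
EXCLUDE_ORGAS=Sandbox,Archive
DRY_RUN=true
```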
## Using Dockerfile
**With .env file**
`docker run --rm -it $(docker build -q .)`
**With environment variables**
`docker run --rm -it -eGITEA_INSTANCE=localhost -eGITEA_PORT=3000 $(docker build -q .)`
Use `-e` arguments as shown in the example above for setting environment variables.
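The same pattern extends to any variable from the table above; for instance, a dry run against a local instance might look like this (all values are placeholders):
```sh
docker run --rm -it \
  -e GITEA_INSTANCE=localhost \
  -e GITEA_PORT=3000 \
  -e GITEA_TOKEN=your-secret-user-token \
  -e SOURCE_ORGA=MyTemplateOrga \
  -e DRY_RUN=true \
  $(docker build -q .)
```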
## Example
```sh
python3 auto_mapper.py \
--host "127.0.0.1" \
--port 3000 \
--token "your-secret-user-token" \
--source-orga "MyTemplateOrga"
```
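A cautious first run could combine `--dry-run` with `--exclude` to preview the changes without touching certain organizations (organization names are placeholders):
```sh
python3 auto_mapper.py \
    --host "127.0.0.1" \
    --port 3000 \
    --token "your-secret-user-token" \
    --source-orga "MyTemplateOrga" \
    --exclude "Sandbox" "Archive" \
    --dry-run
```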
For any issues, please use the contact details provided in the Author(s) section above.

View File

@ -0,0 +1,272 @@
import argparse
import os
import socket
from gitea import *
from rich.console import Console
from rich.logging import RichHandler
from loguru import logger
from dotenv import load_dotenv
# fmt: off
load_dotenv()
def str_to_bool(value):
return value.lower() in ['true', '1', 'yes', 'y']
GITEA_INSTANCE : str = os.getenv("GITEA_INSTANCE") if os.getenv("GITEA_INSTANCE") != None else None
GITEA_PORT : int = os.getenv("GITEA_PORT") if os.getenv("GITEA_PORT") != None else 0
GITEA_TOKEN : str = os.getenv("GITEA_TOKEN") if os.getenv("GITEA_TOKEN") != None else None
GITEA_SSL : bool = str_to_bool(os.getenv("GITEA_SSL")) if os.getenv("GITEA_SSL") != None else False
DEBUG : bool = str_to_bool(os.getenv("DEBUG")) if os.getenv("DEBUG") != None else False
SOURCE_ORGA : str = os.getenv("SOURCE_ORGA") if os.getenv("SOURCE_ORGA") != None else None
DRY_RUN : bool = str_to_bool(os.getenv("DRY_RUN")) if os.getenv("DRY_RUN") != None else False
EXCLUDE_ORGAS : list = os.getenv("EXCLUDE_ORGAS").split(',') if os.getenv("EXCLUDE_ORGAS") != None else []
UPDATE_TEAMS : bool = str_to_bool(os.getenv("UPDATE_TEAMS")) if os.getenv("UPDATE_TEAMS") != None else False
OVERRIDE_TEAMS : bool = str_to_bool(os.getenv("OVERRIDE_TEAMS")) if os.getenv("OVERRIDE_TEAMS") != None else False
parser = argparse.ArgumentParser()
parser.add_argument("--host", help="Specify the Gitea instance host")
parser.add_argument("--port", help="Specify the Gitea instance port")
parser.add_argument("--token", help="Specify the Gitea instance token")
parser.add_argument("--ssl", help="Specify if the Gitea instance uses SSL", action="store_true")
parser.add_argument("--debug", help="Enable debug logging", action="store_true")
parser.add_argument("--source-orga", help="Specify the source organization")
parser.add_argument("--dry-run", help="Enable dry-run mode, no changes will be made", action="store_true")
parser.add_argument("--exclude", help="Specify organizations to exclude", nargs="+")
parser.add_argument("--update", help="Updates existing teams in target organizations", action="store_true")
parser.add_argument("--override", help="Override existing teams in target organizations", action="store_true")
args = parser.parse_args()
GITEA_INSTANCE : str = args.host if args.host else GITEA_INSTANCE
GITEA_PORT : int = args.port if args.port else GITEA_PORT
GITEA_TOKEN : str = args.token if args.token else GITEA_TOKEN
GITEA_SSL : bool = args.ssl if args.ssl else GITEA_SSL
DEBUG : bool = args.debug if args.debug else DEBUG
SOURCE_ORGA : str = args.source_orga if args.source_orga else SOURCE_ORGA
DRY_RUN : bool = args.dry_run if args.dry_run else DRY_RUN
EXCLUDE_ORGAS : list = args.exclude if args.exclude else EXCLUDE_ORGAS
UPDATE_TEAMS : bool = args.update if args.update else UPDATE_TEAMS
OVERRIDE_TEAMS : bool = args.override if args.override else OVERRIDE_TEAMS
# fmt: on
console = Console()
logger.remove() # Remove default handler
logger.add(
RichHandler(console=console, show_time=True, show_level=True, show_path=False),
format="{time:YYYY-MM-DD HH:mm:ss} - {message}",
level="DEBUG" if args.debug else "INFO",
)
# fmt: off
logger.info("Starting Gitea Auto Mapper")
logger.debug("Debug logging enabled")
logger.info(f"Dry-run mode: {'Enabled' if DRY_RUN else 'Disabled'}")
logger.info(f"Target Gitea instance: {GITEA_INSTANCE}:{GITEA_PORT}")
logger.info(f"Using SSL: {'Enabled' if GITEA_SSL else 'Disabled'}")
logger.info(f"Source organization: {SOURCE_ORGA}")
logger.info(f"Excluded organizations: {', '.join(EXCLUDE_ORGAS) if EXCLUDE_ORGAS else 'None'}")
logger.info(f"Update mode: {'Enabled' if UPDATE_TEAMS else 'Disabled'}")
logger.info(f"Override mode: {'Enabled' if OVERRIDE_TEAMS else 'Disabled'}")
# fmt: on
def check_host(host, port):
if not host:
raise Exception("Host not specified")
if not port:
port = 3000
logger.warning(f"Port not specified, defaulting to {port}")
try:
port = int(port)
if port < 1 or port > 65535:
raise ValueError("Invalid port number")
except ValueError:
raise Exception(f"{port} is not a valid port")
try:
socket.inet_aton(host)
except socket.error:
if host.startswith("http://"):
host = host[7:]
elif host.startswith("https://"):
host = host[8:]
else:
raise Exception(f"{host} is not a valid host")
try:
socket.gethostbyname(host)
except Exception as e:
raise Exception(f"{host} is not a valid host: {e}")
try:
logger.debug(f"Checking connection to {host}:{port}")
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
s.connect((host, port))
s.close()
except Exception as e:
raise Exception(f"Connection to {host}:{port} failed: {e}")
# Log with a spinner
with console.status("- Checking Gitea Endpoint", spinner="dots") as status:
try:
check_host(GITEA_INSTANCE, GITEA_PORT)
logger.info(f"Gitea endpoint at {GITEA_INSTANCE}:{GITEA_PORT} is valid")
except Exception as e:
logger.error(f"Failed to check Gitea endpoint: {e}")
logger.error(
"Please double check the Gitea endpoint and make sure it is reachable"
)
os._exit(1)
# Create a Gitea API client
gitea_client = None
with console.status("- Creating Gitea Client", spinner="dots") as status:
try:
if GITEA_SSL:
logger.debug("Using SSL")
gitea_client = Gitea(f"https://{GITEA_INSTANCE}:{GITEA_PORT}", GITEA_TOKEN)
else:
logger.debug("Not using SSL")
gitea_client = Gitea(f"http://{GITEA_INSTANCE}:{GITEA_PORT}", GITEA_TOKEN)
except Exception as e:
logger.error(f"Failed to create Gitea client: {e}")
if not gitea_client:
os._exit(1)
# Check if the token is valid
with console.status("- Checking Gitea Token", spinner="dots") as status:
try:
v = gitea_client.get_version()
u = gitea_client.get_user()
logger.info(f"Connected to Gitea {v}")
logger.info(f"Authenticated as {u.username}")
except Exception as e:
logger.error(f"Failed to check Gitea token: {e}")
# Get the source organization
source_orga = None
with console.status("- Getting Source Organization", spinner="dots") as status:
try:
source_orga = Organization.request(gitea_client, SOURCE_ORGA)
logger.info(f"Source organization is '{source_orga.name}'")
except Exception as e:
logger.error(f"Failed to get source organization: {e}")
if not source_orga:
os._exit(1)
# Get the source organization teams
source_orga_teams = None
with console.status("- Getting Source Organization Teams", spinner="dots") as status:
try:
source_orga_teams = source_orga.get_teams()
source_orga_teams = [
team for team in source_orga_teams if team.name != "Owners"
] # Skip the default team 'Owners'
logger.info(f"Source organization has {len(source_orga_teams)} teams:")
for team in source_orga_teams:
logger.info(f" - {team.name}")
logger.info("Note: The default team 'Owners' will always be skipped!")
except Exception as e:
logger.error(f"Failed to get source organization teams: {e}")
if not source_orga_teams:
os._exit(1)
# Get all other organizations except for the source organization
all_orgas = None
with console.status("- Getting All Organizations", spinner="dots") as status:
try:
all_orgas = gitea_client.get_orgs()
all_orgas = [orga for orga in all_orgas if orga.name != source_orga.name]
logger.info(f"Found {len(all_orgas)} other organizations:")
for orga in all_orgas:
logger.info(f" - {orga.name}")
logger.info(
f"Note: The source organization {source_orga.name} will always be skipped!"
)
except Exception as e:
logger.error(f"Failed to get all organizations: {e}")
if not all_orgas:
os._exit(1)
# Copy teams from source organization to all other organizations except for the source organization
with console.status("- Copying Teams", spinner="dots") as status:
if DRY_RUN:
logger.warning("Dry-run mode enabled, no changes will be made")
if OVERRIDE_TEAMS:
logger.info("Update mode enabled, existing teams will be updated")
for orga in all_orgas:
if orga.name in EXCLUDE_ORGAS:
logger.info(f"Skipping organization '{orga.name}'")
continue
logger.info(f"{source_orga.name} -> {orga.name}")
for team in source_orga_teams:
try:
# check if the team already exists in the target organization
orga_teams = orga.get_teams()
existing_team = next(
(t for t in orga_teams if t.name == team.name), None
)
if existing_team:
logger.debug(f"\tTeam {team.name} already exists in {orga.name}")
if OVERRIDE_TEAMS and existing_team:
logger.info(f"\tDeleting existing team '{team.name}'")
if not DRY_RUN:
existing_team.delete()
existing_team = None
if not DRY_RUN and not existing_team:
logger.info(f"\tCreating team '{team.name}'")
new_team = gitea_client.create_team(
org=orga,
name=team.name,
description=team.description,
permission="read",
includes_all_repositories=False,
can_create_org_repo=False,
units=[
"repo.code",
"repo.issues",
"repo.ext_issues",
"repo.wiki",
"repo.pulls",
"repo.releases",
"repo.ext_wiki",
"repo.actions",
"repo.projects",
],
units_map={
"repo.code": "none",
"repo.ext_issues": "none",
"repo.ext_wiki": "none",
"repo.issues": "none",
"repo.projects": "none",
"repo.pulls": "none",
"repo.releases": "none",
"repo.wiki": "none",
},
)
if not DRY_RUN and existing_team:
logger.info(f"\tUpdating existing team '{team.name}'")
existing_team.description = team.description
except Exception as e:
logger.error(f"Failed to copy team '{team.name}': {e}")

View File

@ -0,0 +1,9 @@
rich
loguru
tqdm
blessings
InquirerPy
pyfiglet
alive-progress
py-gitea
python-dotenv

View File

@ -29,12 +29,15 @@
# --include-issues Include issues repositories (default: False)
# --include-merge-requests Include merge requests repositories (default: False)
#
# --override-groups Override existing groups on Gitea (default: False) - not implemented yet
# --override-users Override existing users on Gitea (default: False) - not implemented yet
# --override-groups Override existing groups on Gitea (default: False)
# --override-users Override existing users on Gitea (default: False)
# --override-projects Override existing projects on Gitea (default: False)
#
# --skip-empty-groups Skip empty groups (default: False) - not implemented yet
# --skip-empty-projects Skip empty projects (default: False) - not implemented yet
# --skip-users [user1,user2,...] Skip specific users (default: None) - not implemented yet
# --skip-groups [group1,group2,...] Skip specific groups (default: None) - not implemented yet
# --skip-projects [project1,project2,...] Skip specific projects (default: None) - not implemented yet
#
# --only-groups Migrate only groups (default: False)
# --only-users Migrate only users (default: False)
@ -85,6 +88,12 @@ OVERRIDE_EXISTING_GROUPS = False
OVERRIDE_EXISTING_USERS = False
OVERRIDE_EXISTING_PROJECTS = False
SKIP_EMPTY_GROUPS = False
SKIP_EMPTY_PROJECTS = False
SKIP_USERS = []
SKIP_GROUPS = []
SKIP_PROJECTS = []
ONLY_GROUPS = False
ONLY_USERS = False
ONLY_PROJECTS = False
@ -154,6 +163,36 @@ GITEA_RESERVED_REPONAMES = [
"wiki",
]
# Runtime variables
# fmt: off
STATS = {
"users": {
"deleted": [],
"created": [],
"skipped": [],
"updated": [],
"errors": []
},
"groups": {
"deleted": [],
"created": [],
"skipped": [],
"updated": [],
"errors": []
},
"projects": {
"deleted": [],
"created": [],
"skipped": [],
"updated": [],
"errors": []
},
}
# fmt: on
# Imports
import os
@ -585,20 +624,27 @@ def _exception(exception, custom_message=None):
lineno = exc_tb.tb_lineno
formatted_traceback = traceback.format_exc()
# Prepare the exception message
exception_message = (f"{custom_message}\n" if custom_message else "") + (
f"\033[1m\033[31m[EXC]\033[0m {exception} "
# Prepare formatted and clear exception messages
formatted_exception_message = (
(f"{custom_message}\n" if custom_message else "")
+ f"\033[1m\033[31m[EXC]\033[0m {exception} "
f"(file: {filename}, line: {lineno})\n"
f"{formatted_traceback}\n"
)
# Print the exception message to the console
print(exception_message)
clear_exception_message = (
(f"{custom_message}\n" if custom_message else "")
+ f"[EXC] {exception} (file: {filename}, line: {lineno})\n"
+ formatted_traceback
)
# Write the exception message to the log file if defined
# Print the formatted exception message to the console
print(formatted_exception_message)
# Write the clear exception message to the log file if defined
if LOG_FILE:
with open(LOG_FILE, "a") as log_file:
log_file.write(f"[EXC] {exception_message}\n")
log_file.write(clear_exception_message + "\n")
# PROGRAM
@ -638,16 +684,20 @@ def gitlab2gitea_visibility(visibility: str) -> str:
def check_gitlab():
_debug(f"REQUEST: GET {GITLAB_URL}/api/{GITLAB_API_VERSION}/version")
response = requests.get(
f"{GITLAB_URL}/api/{GITLAB_API_VERSION}/version",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"Bearer {GITLAB_TOKEN}",
},
)
try:
response = requests.get(
f"{GITLAB_URL}/api/{GITLAB_API_VERSION}/version",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"Bearer {GITLAB_TOKEN}",
},
)
_trace(f"RESPONSE: {response.json()}")
_trace(f"RESPONSE: {response.json()}")
except Exception as e:
_exception(e, f"Failed to get GitLab version: {e}")
EXIT_REQUESTED = True
if response.status_code != 200:
response_message = (
@ -664,16 +714,20 @@ def check_gitlab():
def check_gitea():
_debug(f"REQUEST: GET {GITEA_URL}/api/{GITEA_API_VERSION}/version")
response = requests.get(
f"{GITEA_URL}/api/{GITEA_API_VERSION}/version",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"token {GITEA_TOKEN}",
},
)
try:
response = requests.get(
f"{GITEA_URL}/api/{GITEA_API_VERSION}/version",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"token {GITEA_TOKEN}",
},
)
_trace(f"RESPONSE: {response.json()}")
_trace(f"RESPONSE: {response.json()}")
except Exception as e:
_exception(e, f"Failed to get Gitea version: {e}")
EXIT_REQUESTED = True
if response.status_code != 200:
response_message = (
@ -867,10 +921,33 @@ def migrate_gitlab_project_to_gitea(gitlab_project: dict):
if "message" in response.json()
else "Unknown error"
)
STATS["projects"]["errors"].append(
{
"group": (
gitlab_project["namespace"]["path"]
if gitlab_project["namespace"]["kind"] == "group"
else gitlab_project["owner"]["username"]
),
"name": gitlab_project["path"],
"error": response_message if response_message else "Unknown error",
}
)
raise Exception(f"Failed to create Gitea project: {response_message}")
else:
project = response.json()
STATS["projects"]["created"].append(
{
"group": (
gitlab_project["namespace"]["path"]
if gitlab_project["namespace"]["kind"] == "group"
else gitlab_project["owner"]["username"]
),
"name": gitlab_project["path"],
}
)
return project
@ -911,6 +988,15 @@ def migrate_gitlab_user_to_gitea(user: dict):
if "message" in response.json()
else "Unknown error"
)
STATS["users"]["errors"].append(
{
"username": user["username"],
"error": response_message if response_message else "Unknown error",
"admin": user["is_admin"],
}
)
raise Exception(f"Failed to create Gitea user: {response_message}")
else:
user = response.json()
@ -920,6 +1006,14 @@ def migrate_gitlab_user_to_gitea(user: dict):
else:
_info(f'User "{user["username"]}" created on Gitea')
STATS["users"]["created"].append(
{
"username": user["username"],
"email": user["email"],
"admin": user["is_admin"],
}
)
return user
@ -1049,10 +1143,22 @@ def migrate_gitlab_group_to_gitea(gitlab_group: dict):
if "message" in response.json()
else "Unknown error"
)
STATS["groups"]["errors"].append(
{
"name": gitlab_group["path"],
"error": response_message if response_message else "Unknown error",
}
)
raise Exception(f"Failed to create Gitea group: {response_message}")
else:
group = response.json()
STATS["groups"]["created"].append(
{"name": gitlab_group["path"], "full_name": gitlab_group["full_name"]}
)
return group
@ -1497,6 +1603,9 @@ def create_missing_groups(gitlab_groups: list, gitea_groups: list):
if is_gitea_reserved_organame(name):
_warn(f'Skipping group "{name}": Group name is reserved on Gitea!')
STATS["groups"]["skipped"].append({"name": name, "reason": "Reserved"})
continue
if not exists:
@ -1505,8 +1614,22 @@ def create_missing_groups(gitlab_groups: list, gitea_groups: list):
try:
_info(f'Migrating Gitlab group "{name}" to Gitea...')
migrate_gitlab_group_to_gitea(gitlab_group)
_info(f'Group "{name}" created on Gitea')
except Exception as e:
_exception(f'Failed to create Gitea group "{name}": {e}', e)
else:
if OVERRIDE_EXISTING_GROUPS:
try:
_info(
f'Group "{name}" already exists on Gitea and will be overridden...'
)
delete_gitea_group(name)
_info(f'Group "{name}" deleted on Gitea')
_info(f'Creating missing group "{name}" on Gitea...')
migrate_gitlab_group_to_gitea(gitlab_group)
_info(f'Group "{name}" created on Gitea')
except Exception as e:
_exception(f'Failed to override Gitea group "{name}": {e}', e)
def create_missing_users(gitlab_users: list, gitea_users: list):
@ -1530,14 +1653,27 @@ def create_missing_users(gitlab_users: list, gitea_users: list):
_warn(
f'User "{name}" does not have an email address and will not be created!'
)
STATS["users"]["skipped"].append(
{"username": name, "reason": "No email address"}
)
continue
if gitea_user["username"] == None or gitea_user["username"] == "":
_warn(f'User "{name}" does not have a username and will not be created!')
STATS["users"]["skipped"].append(
{"username": name, "reason": "No username"}
)
continue
if is_gitea_reserved_username(name):
_warn(f'Skipping user "{name}": Username is reserved on Gitea!')
STATS["users"]["skipped"].append({"username": name, "reason": "Reserved"})
continue
if not exists:
@ -1546,8 +1682,22 @@ def create_missing_users(gitlab_users: list, gitea_users: list):
try:
_info(f'Migrating Gitlab user "{name}" to Gitea...')
migrate_gitlab_user_to_gitea(gitlab_user)
_info(f'User "{name}" created on Gitea')
except Exception as e:
_exception(f'Failed to create Gitea user "{name}": {e}', e)
else:
if OVERRIDE_EXISTING_USERS:
try:
_info(
f'User "{name}" already exists on Gitea and will be overridden...'
)
delete_gitea_user(name)
_info(f'User "{name}" deleted on Gitea')
_info(f'Creating missing user "{name}" on Gitea...')
migrate_gitlab_user_to_gitea(gitlab_user)
_info(f'User "{name}" created on Gitea')
except Exception as e:
_exception(f'Failed to override Gitea user "{name}": {e}', e)
def create_missing_projects(gitlab_projects: list, gitea_projects: list):
@ -1572,32 +1722,69 @@ def create_missing_projects(gitlab_projects: list, gitea_projects: list):
break
if is_gitea_reserved_reponame(name):
_warn(f'Skipping project "{name}": Project name is reserved on Gitea!')
_warn(
f'Skipping project "{group}/{name}": Project name is reserved on Gitea!'
)
STATS["projects"]["skipped"].append(
{"group": group, "name": name, "reason": "Reserved"}
)
continue
if is_gitlab_project_in_subgroup(gitlab_project):
_warn(
f'Skipping project "{name}": Project is in a subgroup and not supported by Gitea!'
f'Skipping project "{group}/{name}": Project is in a subgroup and not supported by Gitea!'
)
STATS["projects"]["skipped"].append(
{"group": group, "name": name, "reason": "Unsupported Subgroup"}
)
continue
if not exists:
_info(f'Creating missing project "{name}" on Gitea...')
_info(f'Creating missing project "{group}/{name}" on Gitea...')
try:
_info(f'Migrating Gitlab project "{name}" to Gitea...')
_info(f'Migrating Gitlab project "{group}/{name}" to Gitea...')
migrate_gitlab_project_to_gitea(gitlab_project)
_info(f'Project "{group}/{name}" created on Gitea')
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
except Exception as e:
_exception(f'Failed to create Gitea project "{name}": {e}', e)
_exception(f'Failed to create Gitea project "{group}/{name}": {e}', e)
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
else:
if OVERRIDE_EXISTING_PROJECTS:
try:
_info(
f'Project "{group}/{name}" already exists on Gitea and will be overridden...'
)
delete_gitea_project(gitea_project)
_info(f'Project "{group}/{name}" deleted on Gitea')
_info(f'Creating missing project "{group}/{name}" on Gitea...')
migrate_gitlab_project_to_gitea(gitlab_project)
_info(f'Project "{group}/{name}" created on Gitea')
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
except Exception as e:
_exception(
f'Failed to override Gitea project "{group}/{name}": {e}', e
)
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
def update_existing_groups(gitlab_groups: list, gitea_groups: list):
@ -1634,6 +1821,7 @@ def update_existing_groups(gitlab_groups: list, gitea_groups: list):
"full_name": gitlab_group["full_name"],
},
)
_info(f'Group "{name}" updated on Gitea')
except Exception as e:
_exception(f'Failed to update Gitea group "{name}": {e}', e)
@ -1673,6 +1861,7 @@ def update_existing_users(gitlab_users: list, gitea_users: list):
try:
_info(f'Updating existing user "{name}" on Gitea...')
update_gitea_user(gitlab_user)
_info(f'User "{name}" updated on Gitea')
except Exception as e:
_exception(f'Failed to update Gitea user "{name}": {e}', e)
else:
@ -1702,36 +1891,109 @@ def update_existing_projects(gitlab_projects: list, gitea_projects: list):
break
if is_gitea_reserved_reponame(name):
_warn(f'Skipping project "{name}": Project name is reserved on Gitea!')
_warn(
f'Skipping project "{group}/{name}": Project name is reserved on Gitea!'
)
continue
if is_gitlab_project_in_subgroup(gitlab_project):
_warn(
f'Skipping project "{name}": Project is in a subgroup and not supported by Gitea!'
f'Skipping project "{group}/{name}": Project is in a subgroup and not supported by Gitea!'
)
continue
if exists:
try:
_info(f'Updating existing project "{name}" on Gitea...')
if OVERRIDE_EXISTING_PROJECTS:
delete_gitea_project(gitea_project)
migrate_gitlab_project_to_gitea(gitlab_project)
else:
update_gitea_project(gitlab_project)
_info(f'Updating existing project "{group}/{name}" on Gitea...')
update_gitea_project(gitlab_project)
_info(f'Project "{group}/{name}" updated on Gitea')
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
except Exception as e:
_exception(f'Failed to update Gitea project "{name}": {e}', e)
_exception(f'Failed to update Gitea project "{group}/{name}": {e}', e)
if ONLY_ONE_PROJECT:
_warn("DEBUG MODE ENABLED - BREAKING AFTER FIRST PROJECT")
break
else:
_warn(f'Project "{name}" does not exist on Gitea!')
_warn(f'Project "{group}/{name}" does not exist on Gitea!')
# ENDPOINT: DELETE /api/{GITEA_API_VERSION}/orgs/{org}
def delete_gitea_group(name: str):
_debug(f"REQUEST: DELETE {GITEA_URL}/api/{GITEA_API_VERSION}/orgs/{name}")
response = requests.delete(
f"{GITEA_URL}/api/{GITEA_API_VERSION}/orgs/{name}",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"token {GITEA_TOKEN}",
},
)
if response.status_code != 204:
_trace(f"RESPONSE: {response.json()}")
response_message = (
response.json()["message"]
if "message" in response.json()
else "Unknown error"
)
STATS["groups"]["errors"].append(
{
"name": name,
"error": response_message if response_message else "Unknown error",
}
)
raise Exception(f"Failed to delete Gitea group: {response_message}")
else:
_info(f'Group "{name}" deleted on Gitea')
STATS["groups"]["deleted"].append({"name": name})
# ENDPOINT: DELETE /api/{GITEA_API_VERSION}/admin/users/{username}
def delete_gitea_user(username: str):
_debug(
f"REQUEST: DELETE {GITEA_URL}/api/{GITEA_API_VERSION}/admin/users/{username}"
)
response = requests.delete(
f"{GITEA_URL}/api/{GITEA_API_VERSION}/admin/users/{username}",
headers={
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"token {GITEA_TOKEN}",
},
)
if response.status_code != 204:
_trace(f"RESPONSE: {response.json()}")
response_message = (
response.json()["message"]
if "message" in response.json()
else "Unknown error"
)
STATS["users"]["errors"].append(
{
"username": username,
"error": response_message if response_message else "Unknown error",
}
)
raise Exception(f"Failed to delete Gitea user: {response_message}")
else:
_info(f'User "{username}" deleted on Gitea')
STATS["users"]["deleted"].append({"username": username})
# ENDPOINT: DELETE /api/{GITEA_API_VERSION}/repos/{owner}/{repo}
@ -1751,21 +2013,34 @@ def delete_gitea_project(project: dict):
},
)
_trace(f"RESPONSE: {response.json()}")
if response.status_code != 204:
_trace(f"RESPONSE: {response.json()}")
response_message = (
response.json()["message"]
if "message" in response.json()
else "Unknown error"
)
STATS["projects"]["errors"].append(
{
"group": owner,
"name": repo,
"error": response_message if response_message else "Unknown error",
}
)
raise Exception(f"Failed to delete Gitea project: {response_message}")
else:
_info(f'Project "{owner}/{repo}" deleted on Gitea')
STATS["projects"]["deleted"].append({"group": owner, "name": repo})
def migrate_groups():
if OVERRIDE_EXISTING_GROUPS:
_warn("EXISTING GROUPS WILL BE OVERRIDDEN!")
gitlab_groups = get_gitlab_groups()
gitea_groups = get_gitea_groups()
@ -1785,7 +2060,9 @@ def migrate_groups():
if missing_matches > 0:
_warn(f"{missing_matches} groups are missing on Gitea!")
if missing_matches > 0 and not NO_CREATE_MISSING_GROUPS:
if OVERRIDE_EXISTING_GROUPS or (
missing_matches > 0 and not NO_CREATE_MISSING_GROUPS
):
_info("Creating missing groups on Gitea...")
if not DRY_RUN:
@ -1821,6 +2098,10 @@ def migrate_groups():
def migrate_users():
if OVERRIDE_EXISTING_USERS:
_warn("EXISTING USERS WILL BE OVERRIDDEN!")
gitlab_users = get_gitlab_users()
gitea_users = get_gitea_users()
@ -1840,7 +2121,7 @@ def migrate_users():
if missing_matches > 0:
_warn(f"{missing_matches} users are missing on Gitea!")
if missing_matches > 0 and not NO_CREATE_MISSING_USERS:
if OVERRIDE_EXISTING_USERS or (missing_matches > 0 and not NO_CREATE_MISSING_USERS):
_info("Creating missing users on Gitea...")
if not DRY_RUN:
@ -1919,7 +2200,9 @@ def migrate_projects():
if missing_matches > 0:
_warn(f"{missing_matches} projects are missing on Gitea!")
if missing_matches > 0 and not NO_CREATE_MISSING_PROJECTS:
if OVERRIDE_EXISTING_PROJECTS or (
missing_matches > 0 and not NO_CREATE_MISSING_PROJECTS
):
_info("Creating missing projects on Gitea...")
if not DRY_RUN:
@ -1998,6 +2281,48 @@ def run_migration():
_info("Project migration completed!")
def print_stats():
_info("Migration Statistics:")
_info("")
_info("Groups:")
_info(f" - Created: {len(STATS['groups']['created'])}")
_info(f" - Updated: {len(STATS['groups']['updated'])}")
_info(f" - Deleted: {len(STATS['groups']['deleted'])}")
_info(f" - Skipped: {len(STATS['groups']['skipped'])}")
_info(f" - Errors: {len(STATS['groups']['errors'])}")
_info("")
_info("Errors:")
for error in STATS["groups"]["errors"]:
_error(f' - Group "{error["name"]}": {error["error"]}')
_info("Users:")
_info(f" - Created: {len(STATS['users']['created'])}")
_info(f" - Updated: {len(STATS['users']['updated'])}")
_info(f" - Deleted: {len(STATS['users']['deleted'])}")
_info(f" - Skipped: {len(STATS['users']['skipped'])}")
_info(f" - Errors: {len(STATS['users']['errors'])}")
_info("")
_info("Errors:")
for error in STATS["users"]["errors"]:
_error(f' - User "{error["username"]}": {error["error"]}')
_info("Projects:")
_info(f" - Created: {len(STATS['projects']['created'])}")
_info(f" - Updated: {len(STATS['projects']['updated'])}")
_info(f" - Deleted: {len(STATS['projects']['deleted'])}")
_info(f" - Skipped: {len(STATS['projects']['skipped'])}")
_info(f" - Errors: {len(STATS['projects']['errors'])}")
_info("")
_info("Errors:")
for error in STATS["projects"]["errors"]:
_error(f' - Project "{error["group"]}/{error["name"]}": {error["error"]}')
def main():
signal.signal(signal.SIGINT, signal_handler)
@ -2045,8 +2370,13 @@ def main():
try:
run_migration()
_info("Migration completed!")
_info("")
except Exception as e:
_exception(f"An error occurred: {e}", e)
finally:
print_stats()
def signal_handler(sig, frame):

60
openai/README.md Normal file
View File

@ -0,0 +1,60 @@
# OpenAI API / ChatGPT bash integration
The script `runai.sh` can be used to provide the `runai` command in bash. The command enables you to generate shell commands directly in your terminal.
## Requirements
This script only requires Python 3.7+ and the `openai` Python package to be installed.
Install Python and Pip on Debian/Ubuntu: `sudo apt install python3 python3-pip`
Install Python and Pip on CentOS/RHEL-based: `sudo yum install epel-release && sudo yum install python3-pip`
Install Python and Pip on ArchLinux: `sudo pacman -Sy && sudo pacman -S python-pip`
Install Python and Pip on Fedora: `sudo dnf install python3-pip`
Install via pip: `pip install openai`
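The script reads the API key from the `OPENAI_API_KEY` environment variable and, if it is not set, sources `~/.openai`. A minimal `~/.openai` could therefore look like this (the key is a placeholder):
```bash
# ~/.openai is sourced by runai.sh when OPENAI_API_KEY is not already set
OPENAI_API_KEY="sk-..."   # replace with your own API key
```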
## Installing as alias command
**Note:** To keep the alias working, the script must remain in the same location after the alias has been set.
Install the alias command using `./runai.sh -a`
## Example Usage as Script
```bash
user@host:~$ ./runai.sh scan my primary local network for online hosts
The generated command is:
nmap -sn $(ip route | awk '/default/ {print $3}' | awk -F. '{print $1"."$2"."$3".0/24"}')
Do you want to execute the command? [Y/n]: y
...
```
You are always asked whether the command should be executed, unless you pass `-y` as the first parameter. When a command is executed, you can also access it later through your shell history.
## Example Usage as Alias
```bash
user@host:~$ runai scan my primary local network for online hosts
The generated command is:
nmap -sn $(ip route | awk '/default/ {print $3}' | awk -F. '{print $1"."$2"."$3".0/24"}')
Do you want to execute the command? [Y/n]: y
...
```
## Example Usage with ALL_YES set
**WARNING:** Allowing the script to execute the generated command without confirmation is potentially dangerous. Use with care!
```bash
user@host:~$ runai -y scan my primary local network for online hosts
Running the command without asking for confirmation!
Running the command:
nmap -sn $(ip route | awk '/default/ {print $3}' | awk -F. '{print $1"."$2"."$3".0/24"}')
...
```
**NOTE** that you should *never* provide any sensitive data with your prompt!

103
openai/runai.sh Executable file
View File

@ -0,0 +1,103 @@
#!/bin/bash
MODEL="gpt-4o" # gpt-4o, gpt-4, gpt-3.5-turbo, gpt-3.5, gpt-3, curie, babbage, davinci
TEMPERATURE=0.2 # 0.0 to 1.0 (higher is more creative)
MAX_TOKENS=3072 # 1 to 4096 (higher requires more API credits, but can be more accurate)
# The base prompt describing the rules the AI should follow
read -r -d '' BASE_PROMPT << EOM
Do not explain anything.
Only respond with a command that can be executed on the linux terminal, as long as a valid question was provided by the user.
If you can't generate a valid command, always respond with '<<CANCEL>>'.
If the user did not provide a valid question, respond with '<<CANCEL>>'.
Never use code tags.
Never ask the user for anything.
Always put everything in a single command, e.g. using pipes, semicolons or double ampersands.
Break up the command in multiple lines when using redirections, semicolons or double ampersands, but make sure the command can still be executed (e.g. by using a backslash).
Do not use any formatting except bash escaped newlines.
The question is:
EOM
# INTERNAL VARIABLES - DO NOT CHANGE
ALL_YES=false # If true, the AI will execute the command without asking for confirmation
# Check if the OPENAI_API_KEY environment variable is set
# Ignore if -a is passed as a parameter
if [[ "$1" != "-a" && -z "$OPENAI_API_KEY" ]]; then
OPENAI_API_KEY=$(printenv OPENAI_API_KEY)
if [ -z "$OPENAI_API_KEY" ]; then
# Check if the OPENAI_API_KEY is set in the ~/.openai file
if [ -f ~/.openai ]; then
source ~/.openai
fi
if [ -z "$OPENAI_API_KEY" ]; then
echo "Please set the OPENAI_API_KEY environment variable."
fi
fi
fi
# check if -y is passed as a parameter and set ALL_YES to true
if [ "$1" == "-y" ]; then
ALL_YES=true
shift
fi
function run_ai()
{
if [ -z "$*" ]; then
echo "Usage: runai <prompt>"
return
fi
# if ALL_YES is set to true, print an info message in red bold font
if [ $ALL_YES == true ]; then
echo -e "\033[1;31mRunning the command without asking for confirmation!\033[0m"
fi
echo -e -n "\e[38;5;12m"
command=$(OPENAI_API_KEY=$OPENAI_API_KEY openai api chat.completions.create -m "$MODEL" -t "$TEMPERATURE" -M "$MAX_TOKENS" -g "user" "$BASE_PROMPT $*")
if [ "$command" == "<<CANCEL>>" ]; then
echo -e "\033[1;31mThe AI could not generate a valid command. Please try again with a different prompt.\033[0m"
return
fi
echo -e -n "\e[0m"
if [ $ALL_YES == false ]; then
echo -e "The generated command is:\n\033[0;33m$command\n\033[1;31m"
read -p "Do you want to execute the command? [Y/n]: " input
input=$(echo "$input" | tr '"'"'[A-Z]'"'"' '"'"'[a-z]'"'"')
else
echo -e "Running the command:\n\033[0;33m$command\n\033[1;31m"
fi
if [[ $ALL_YES == true || $input == "y" || $input == "yes" ]]; then
echo -e "\033[0m";
history -s "runai $*"
history -s "$command"
eval "$command"
fi
}
# if the parameter -a is passed, set the alias, otherwise run the AI
if [ "$1" == "-a" ]; then
# check first, if an alias with the same name already exists
if grep -q "alias runai=" ~/.bashrc; then
echo "The alias runai already exists."
return 0 2>/dev/null || exit 0 # works whether the script is sourced or executed directly
else
echo "alias runai='source $(realpath $0)'" >> ~/.bashrc
echo "The alias runai has been set. You can now run the AI by typing runai <command>."
# source the .bashrc file to make the alias available in the current shell
source ~/.bashrc
fi
else
run_ai "$*"
fi

128
proxmox/README.md Normal file
View File

@ -0,0 +1,128 @@
# 📦 Proxmox Container Update Script
A powerful utility script for managing updates across multiple Proxmox containers with flexible options and detailed logging.
## 🔍 Overview
`pct_update.sh` simplifies the maintenance of Proxmox containers by providing an easy way to check, update, and upgrade packages across multiple containers simultaneously. It supports various Linux distributions and offers extensive customization options for handling container updates efficiently.
## ✨ Features
- **Status Checking**: Quickly identify containers with pending updates
- **Package Management**: Update package caches or perform full system upgrades
- **Selective Processing**: Include or exclude specific containers by ID
- **Container Control**: Boot non-running containers before operations
- **Safety Options**: Dry-run mode to test without making changes
- **Automation Support**: Non-interactive mode for scheduled tasks
- **Comprehensive Logging**: Color-coded logs with adjustable verbosity
- **OS Support**: Works with Debian, Ubuntu, Alpine, CentOS, Fedora, and RHEL
## 🚀 Installation
1. Clone the repository or download the script
2. Make the script executable:
```bash
chmod +x pct_update.sh
```
3. Run the script with appropriate options (see Usage)
Optionally, you can copy the script to `/usr/local/sbin` to make it available system-wide:
```bash
sudo cp pct_update.sh /usr/local/sbin/pct_update
sudo chmod +x /usr/local/sbin/pct_update
```
## 🛠️ Usage
```bash
./pct_update.sh [options]
```
(or, if globally installed)
```bash
pct_update [options]
```
### ⚙️ Options
- `-s` : Get the package status of each container.
- `-u` : Update the package cache, but do not upgrade.
- `-U` : Perform upgrades on all containers.
- `-x <ID>` : Exclude a container by its ID (can be used multiple times).
- `-i <ID>` : Include only specific containers by their IDs (can be used multiple times).
- `-y` : Assume 'yes' for all operations (asks once).
- `-Y` : Assume 'yes' for all operations without asking (for automation).
- `-b` : Boot non-running containers before performing actions.
- `-B <ID>` : Boot a specific container by its ID (can be used multiple times).
- `-d` : Dry-run mode (simulate actions without making changes).
- `-v` : Enable verbose mode.
- `-h` : Print this help and exit.
## 💻 Prerequisites
Ensure the following commands are available on your system:
- `awk`
- `grep`
- `pct` (Proxmox Container Toolkit)
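The script verifies these at startup and exits if one is missing; the same check can be run manually beforehand:
```bash
# Manual pre-flight check, mirroring the check pct_update.sh performs at startup
for cmd in awk grep pct; do
    command -v "$cmd" >/dev/null 2>&1 || echo "Missing required command: $cmd"
done
```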
## 🚨 Warnings
- Running as root: The script will warn if executed as root. Ensure this is necessary for your operations.
- Conflicts: Do not use the `-b` and `-B` options together; the script treats this combination as an error and exits.
## 📋 Examples
- **Check package status for all containers**:
```bash
./pct_update.sh -s
```
- **Update package cache for specific containers**:
```bash
./pct_update.sh -u -i 101 -i 102
```
- **Upgrade all containers, excluding some**:
```bash
./pct_update.sh -U -x 103 -x 104
```
- **Dry-run an upgrade for all containers**:
```bash
./pct_update.sh -Ud
```
- **Boot a specific container before upgrade**:
```bash
./pct_update.sh -U -B 105
```
- **Update all containers without asking for confirmation**:
```bash
./pct_update.sh -YU
```
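- **Run unattended from cron** (illustrative only; the schedule and log path are assumptions, not part of the script):
```bash
# Root crontab entry: upgrade all containers every Monday at 03:00 without prompting
0 3 * * 1 /usr/local/sbin/pct_update -YU >> /var/log/pct_update.log 2>&1
```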
## 📝 Logging
The script provides detailed logging for each operation, categorized by:
- **DEBUG**: Detailed information for debugging.
- **INFO**: General information about the operations.
- **WARNING**: Potential issues that do not stop the script.
- **ERROR**: Critical issues that prevent operations.
## 🤝 Contributions
Contributions are welcome! Feel free to open issues or submit pull requests to improve this script.
## 📄 License
This project is licensed under the [MIT License](https://git.zion-networks.de/ZionNetworks/linux-bash-scripts/src/branch/main/LICENSE).
---
Happy updating! 🎉

510
proxmox/pct_update.sh Executable file
View File

@ -0,0 +1,510 @@
#!/bin/bash
# Define color variables for formatted output
COLOR_DEBUG="\033[0;36m" # Cyan
COLOR_INFO="\033[0;32m" # Green
COLOR_WARNING="\033[0;33m" # Yellow
COLOR_ERROR="\033[0;31m" # Red
COLOR_DRY_RUN="\033[0;35m" # Purple
COLOR_RESET="\033[0m" # Reset to default
# Color Codes for all colors
COLOR_BLACK="\033[0;30m"
COLOR_RED="\033[0;31m"
COLOR_GREEN="\033[0;32m"
COLOR_YELLOW="\033[0;33m"
COLOR_BLUE="\033[0;34m"
COLOR_PURPLE="\033[0;35m"
COLOR_CYAN="\033[0;36m"
COLOR_WHITE="\033[0;37m"
# Color Codes for all colors with background
COLOR_BLACK_BG="\033[0;40m"
COLOR_RED_BG="\033[0;41m"
COLOR_GREEN_BG="\033[0;42m"
COLOR_YELLOW_BG="\033[0;43m"
COLOR_BLUE_BG="\033[0;44m"
COLOR_PURPLE_BG="\033[0;45m"
COLOR_CYAN_BG="\033[0;46m"
COLOR_WHITE_BG="\033[0;47m"
# Formatting Codes
BOLD="\033[1m"
DIM="\033[2m"
UNDERLINED="\033[4m"
BLINK="\033[5m"
INVERTED="\033[7m"
HIDDEN="\033[8m"
# Combined Formatting Codes
BOLD_BLACK="\033[1;30m"
BOLD_RED="\033[1;31m"
BOLD_GREEN="\033[1;32m"
BOLD_YELLOW="\033[1;33m"
BOLD_BLUE="\033[1;34m"
BOLD_PURPLE="\033[1;35m"
BOLD_CYAN="\033[1;36m"
BOLD_WHITE="\033[1;37m"
DIM_BLACK="\033[2;30m"
DIM_RED="\033[2;31m"
DIM_GREEN="\033[2;32m"
DIM_YELLOW="\033[2;33m"
DIM_BLUE="\033[2;34m"
DIM_PURPLE="\033[2;35m"
DIM_CYAN="\033[2;36m"
DIM_WHITE="\033[2;37m"
UNDERLINED_BLACK="\033[4;30m"
UNDERLINED_RED="\033[4;31m"
UNDERLINED_GREEN="\033[4;32m"
UNDERLINED_YELLOW="\033[4;33m"
UNDERLINED_BLUE="\033[4;34m"
UNDERLINED_PURPLE="\033[4;35m"
UNDERLINED_CYAN="\033[4;36m"
UNDERLINED_WHITE="\033[4;37m"
BLINK_BLACK="\033[5;30m"
BLINK_RED="\033[5;31m"
BLINK_GREEN="\033[5;32m"
BLINK_YELLOW="\033[5;33m"
BLINK_BLUE="\033[5;34m"
BLINK_PURPLE="\033[5;35m"
BLINK_CYAN="\033[5;36m"
BLINK_WHITE="\033[5;37m"
INVERTED_BLACK="\033[7;30m"
INVERTED_RED="\033[7;31m"
INVERTED_GREEN="\033[7;32m"
INVERTED_YELLOW="\033[7;33m"
INVERTED_BLUE="\033[7;34m"
INVERTED_PURPLE="\033[7;35m"
INVERTED_CYAN="\033[7;36m"
INVERTED_WHITE="\033[7;37m"
HIDDEN_BLACK="\033[8;30m"
HIDDEN_RED="\033[8;31m"
HIDDEN_GREEN="\033[8;32m"
HIDDEN_YELLOW="\033[8;33m"
HIDDEN_BLUE="\033[8;34m"
HIDDEN_PURPLE="\033[8;35m"
HIDDEN_CYAN="\033[8;36m"
HIDDEN_WHITE="\033[8;37m"
BOLD_DIM_BLACK="\033[1;2;30m"
BOLD_DIM_RED="\033[1;2;31m"
BOLD_DIM_GREEN="\033[1;2;32m"
BOLD_DIM_YELLOW="\033[1;2;33m"
BOLD_DIM_BLUE="\033[1;2;34m"
BOLD_DIM_PURPLE="\033[1;2;35m"
BOLD_DIM_CYAN="\033[1;2;36m"
BOLD_DIM_WHITE="\033[1;2;37m"
BOLD_UNDERLINED_BLACK="\033[1;4;30m"
BOLD_UNDERLINED_RED="\033[1;4;31m"
BOLD_UNDERLINED_GREEN="\033[1;4;32m"
BOLD_UNDERLINED_YELLOW="\033[1;4;33m"
BOLD_UNDERLINED_BLUE="\033[1;4;34m"
BOLD_UNDERLINED_PURPLE="\033[1;4;35m"
BOLD_UNDERLINED_CYAN="\033[1;4;36m"
BOLD_UNDERLINED_WHITE="\033[1;4;37m"
BOLD_BLINK_BLACK="\033[1;5;30m"
BOLD_BLINK_RED="\033[1;5;31m"
BOLD_BLINK_GREEN="\033[1;5;32m"
BOLD_BLINK_YELLOW="\033[1;5;33m"
BOLD_BLINK_BLUE="\033[1;5;34m"
BOLD_BLINK_PURPLE="\033[1;5;35m"
BOLD_BLINK_CYAN="\033[1;5;36m"
BOLD_BLINK_WHITE="\033[1;5;37m"
DIM_UNDERLINED_BLACK="\033[2;4;30m"
DIM_UNDERLINED_RED="\033[2;4;31m"
DIM_UNDERLINED_GREEN="\033[2;4;32m"
DIM_UNDERLINED_YELLOW="\033[2;4;33m"
DIM_UNDERLINED_BLUE="\033[2;4;34m"
DIM_UNDERLINED_PURPLE="\033[2;4;35m"
DIM_UNDERLINED_CYAN="\033[2;4;36m"
DIM_UNDERLINED_WHITE="\033[2;4;37m"
DIM_BLINK_BLACK="\033[2;5;30m"
DIM_BLINK_RED="\033[2;5;31m"
DIM_BLINK_GREEN="\033[2;5;32m"
DIM_BLINK_YELLOW="\033[2;5;33m"
DIM_BLINK_BLUE="\033[2;5;34m"
DIM_BLINK_PURPLE="\033[2;5;35m"
DIM_BLINK_CYAN="\033[2;5;36m"
DIM_BLINK_WHITE="\033[2;5;37m"
BOLD_DIM_UNDERLINED_BLACK="\033[1;2;4;30m"
BOLD_DIM_UNDERLINED_RED="\033[1;2;4;31m"
BOLD_DIM_UNDERLINED_GREEN="\033[1;2;4;32m"
BOLD_DIM_UNDERLINED_YELLOW="\033[1;2;4;33m"
BOLD_DIM_UNDERLINED_BLUE="\033[1;2;4;34m"
BOLD_DIM_UNDERLINED_PURPLE="\033[1;2;4;35m"
BOLD_DIM_UNDERLINED_CYAN="\033[1;2;4;36m"
BOLD_DIM_UNDERLINED_WHITE="\033[1;2;4;37m"
# Function to print help
print_help() {
echo "Usage: $0 [options]"
echo "Options:"
echo " -s Get the package status of each container"
echo " -u Update the package cache, but do not upgrade"
echo " -U Perform upgrades on all containers"
echo " -x <ID> Exclude a container by its ID (can be used multiple times)"
echo " -i <ID> Include only specific containers by their IDs (can be used multiple times)"
echo " -y Assume 'yes' for all operations (asks once)"
echo " -Y Assume 'yes' for all operations without asking (for automation)"
echo " -b Boot non-running containers before performing actions"
echo " -B <ID> Boot a specific container by its ID (can be used multiple times)"
echo " -d Dry-run mode (simulate actions without making changes)"
echo " -v Enable verbose mode"
echo " -h Print this help and exit"
}
# Logging functions
log_dbg() { [ "$VERBOSE" = true ] && echo -e "${COLOR_DEBUG}[$(date '+%Y-%m-%d %H:%M:%S')][DEBUG] $1${COLOR_RESET}"; }
log_inf() { echo -e "${COLOR_INFO}[$(date '+%Y-%m-%d %H:%M:%S')][INFO] $1${COLOR_RESET}"; }
log_wrn() { echo -e "${COLOR_WARNING}[$(date '+%Y-%m-%d %H:%M:%S')][WARNING] $1${COLOR_RESET}"; }
log_err() { echo -e "${COLOR_ERROR}[$(date '+%Y-%m-%d %H:%M:%S')][ERROR] $1${COLOR_RESET}"; }
log_dry_run() { echo -e "${COLOR_DRY_RUN}[DRY-RUN] $1${COLOR_RESET}"; }
# Check for required commands
for cmd in awk grep pct; do
if ! command -v $cmd &> /dev/null; then
log_err "Required command $cmd is not available."
exit 1
fi
done
# Warn if running as root
if [ "$(id -u)" -eq 0 ]; then
log_wrn "Running as root. Ensure this is necessary for your operations."
fi
# Initialize variables
EXCLUDE_IDS=()
INCLUDE_IDS=()
BOOT_IDS=()
ACTION=""
ASSUME_YES=false
FORCE_YES=false
BOOT_CONTAINERS=false
DRY_RUN=false
VERBOSE=false
# Internal variables
STATUS_SUMMARIES=()
# Parse arguments
while getopts ":suUx:i:hyYbdvB:" opt; do
case ${opt} in
s|u|U)
if [[ -n "$ACTION" && "$ACTION" != "$opt" ]]; then
log_err "Multiple actions specified. Please specify only one of -s, -u, or -U."
exit 1
fi
ACTION="$opt"
;;
x) EXCLUDE_IDS+=("$OPTARG") ;;
i) INCLUDE_IDS+=("$OPTARG") ;;
y) ASSUME_YES=true ;;
Y) FORCE_YES=true ;;
b) BOOT_CONTAINERS=true ;;
B) BOOT_IDS+=("$OPTARG") ;;
d) DRY_RUN=true ;;
v) VERBOSE=true ;;
h) print_help; exit 0 ;;
\?) log_err "Invalid option: -$OPTARG"; exit 1 ;;
:) log_err "Option -$OPTARG requires an argument."; exit 1 ;;
esac
done
# Check for conflicts between -b and -B
if [ "$BOOT_CONTAINERS" = true ] && [ ${#BOOT_IDS[@]} -gt 0 ]; then
log_err "Conflict: Both -b and -B options are set. Use only one. -b will boot all containers, which may not be intended."
exit 1
fi
# Check if an action is specified
if [ -z "$ACTION" ]; then
log_err "No action specified. Use -h for help."
exit 1
fi
# Check for conflicts between include and exclude lists
for id in "${INCLUDE_IDS[@]}"; do
if [[ " ${EXCLUDE_IDS[@]} " =~ " $id " ]]; then
log_err "Container ID $id is both included and excluded. This is not allowed."
exit 1
fi
done
# Get list of all containers
log_inf "Retrieving list of containers..."
CONTAINERS=$(pct list | awk 'NR>1 {print $1}' | grep -E '^[0-9]+$')
log_dbg "Containers found: $CONTAINERS"
# Function to check if a container ID is in the exclude list
is_excluded() {
local id="$1"
for exclude_id in "${EXCLUDE_IDS[@]}"; do
if [ "$exclude_id" == "$id" ]; then
return 0
fi
done
return 1
}
# Function to check if a container ID is in the include list
is_included() {
local id="$1"
if [ ${#INCLUDE_IDS[@]} -eq 0 ]; then
return 0
fi
for include_id in "${INCLUDE_IDS[@]}"; do
if [ "$include_id" == "$id" ]; then
return 0
fi
done
return 1
}
# Function to check if a container ID is in the boot list
is_booted() {
local id="$1"
for boot_id in "${BOOT_IDS[@]}"; do
if [ "$boot_id" == "$id" ]; then
return 0
fi
done
return 1
}
# Function to confirm actions
confirm_action() {
    local message="$1"
    if $FORCE_YES; then
        return 0
    elif $ASSUME_YES; then
        # -y is documented to ask only once, so the first answer is reused for all further operations
        if [ -z "$ASSUME_YES_ANSWER" ]; then
            log_wrn "Assuming 'yes' for all operations. Proceed with caution."
            read -p "$message (y/n, applies to all further operations): " choice
            case "$choice" in
                y|Y ) ASSUME_YES_ANSWER="y" ;;
                * ) ASSUME_YES_ANSWER="n" ;;
            esac
        fi
        [ "$ASSUME_YES_ANSWER" = "y" ]
        return $?
    fi
    read -p "$message (y/n): " choice
    case "$choice" in
        y|Y ) return 0 ;;
        * ) return 1 ;;
    esac
}
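# Usage example: confirm_action "Boot container 100?" && pct start 100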
# Function to validate container ID
validate_container_id() {
local id="$1"
if ! [[ "$id" =~ ^[0-9]+$ ]]; then
log_err "Invalid container ID: $id"
return 1
fi
return 0
}
# Function to check if a container ID exists
container_exists() {
local id="$1"
if pct list | awk 'NR>1 {print $1}' | grep -q "^$id$"; then
return 0
else
log_err "Container ID $id does not exist."
return 1
fi
}
# Function to check if there are enough resources to boot a container
can_boot_container() {
local id="$1"
# Example check: Ensure there is enough free memory (this is a placeholder and should be replaced with actual checks)
    local required_memory=$(pct config "$id" | grep -i "^memory" | awk '{print $2}')
    local free_memory=$(free -m | awk '/^Mem:/{print $7}')
    # If the container has no explicit memory setting, skip the check rather than failing the comparison below
    if [ -z "$required_memory" ]; then
        log_dbg "No memory setting found for container $id; skipping memory check."
        return 0
    fi
    if [ "$free_memory" -lt "$required_memory" ]; then
        log_err "Not enough memory to boot container $id. Required: $required_memory MB, Available: $free_memory MB."
        return 1
    fi
return 0
}
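# Note: the memory check above assumes 'pct config <CTID>' prints a line like "memory: 512"
# (value in MB) and compares it against the "available" column of 'free -m'.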
# Helper to wrap a message in a color and restore the previous color afterwards
color() {
    local color="$1"
    local message="$2"
    local previous_color="${3:-$COLOR_RESET}"
    # 'return' only accepts a numeric status, so the colored string must be printed instead
    echo -en "${color}${message}${previous_color}"
}
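# Example: echo "$(color "$COLOR_WARNING" "low disk space")"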
# Filter containers based on include/exclude lists
FILTERED_CONTAINERS=()
for CTID in $CONTAINERS; do
if ! validate_container_id "$CTID"; then
continue
fi
if ! container_exists "$CTID"; then
continue
fi
if is_excluded "$CTID"; then
continue
fi
if is_included "$CTID"; then
FILTERED_CONTAINERS+=("$CTID")
fi
done
# Check if there are any containers to process
if [ ${#FILTERED_CONTAINERS[@]} -eq 0 ]; then
log_wrn "No containers match the specified criteria."
exit 0
fi
# Iterate over each filtered container
for CTID in "${FILTERED_CONTAINERS[@]}"; do
log_dbg "Processing container ID: $CTID"
# Check if the container is running
    # $? after the command substitution reflects awk, not pct, so test for an empty result instead
    STATUS=$(pct status "$CTID" 2>/dev/null | awk '{print $2}')
    if [ -z "$STATUS" ]; then
        log_err "Failed to retrieve status for container $CTID."
        continue
    fi
log_dbg "Container $CTID status: $STATUS"
if [ "$STATUS" != "running" ]; then
if $BOOT_CONTAINERS || is_booted "$CTID"; then
log_inf "Container $CTID is not running."
if $DRY_RUN; then
log_dry_run "Would boot container $CTID."
else
if can_boot_container "$CTID"; then
if confirm_action "Boot container $CTID?"; then
if pct start "$CTID"; then
log_inf "Container $CTID started."
else
log_err "Failed to start container $CTID."
continue
fi
else
log_wrn "Boot skipped for container $CTID."
continue
fi
else
log_wrn "Insufficient resources to boot container $CTID."
continue
fi
fi
else
log_wrn "Container $CTID is not running and will be skipped."
continue
fi
fi
# Get the OS of the container
OS=$(pct config "$CTID" | grep -i "ostype" | awk '{print $2}' | tr '[:upper:]' '[:lower:]')
CTNAME=$(pct config "$CTID" | grep -i "hostname" | awk '{print $2}')
log_dbg "Container $CTID OS: $OS, Hostname: $CTNAME"
# Determine package manager and commands based on OS
case "$OS" in
debian|ubuntu)
PKG_STATUS_CMD="apt list --upgradable 2>/dev/null | grep -v 'Listing' | wc -l"
UPDATE_CMD="apt update"
UPGRADE_CMD="apt upgrade -y"
;;
        alpine)
            # 'apk version -l "<"' prints a header line, so count only the lines that actually contain '<'
            PKG_STATUS_CMD="apk version -l '<' | grep -c '<'"
            UPDATE_CMD="apk update"
            # apk runs non-interactively by default and has no -y flag
            UPGRADE_CMD="apk upgrade"
            ;;
centos|fedora|rhel)
if pct exec "$CTID" -- /bin/sh -c "command -v dnf" &> /dev/null; then
PKG_STATUS_CMD="dnf check-update | wc -l"
UPDATE_CMD="dnf makecache"
UPGRADE_CMD="dnf upgrade -y"
else
PKG_STATUS_CMD="yum check-update | wc -l"
UPDATE_CMD="yum makecache"
UPGRADE_CMD="yum upgrade -y"
fi
;;
*)
log_err "Unsupported OS for container $CTID: $OS"
continue
;;
esac
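    # Example: for a Debian/Ubuntu container, a status check (-s) effectively runs
    #   pct exec <CTID> -- /bin/sh -c "apt list --upgradable 2>/dev/null | grep -v 'Listing' | wc -l"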
log_dbg "Package status command: $PKG_STATUS_CMD"
log_dbg "Update command: $UPDATE_CMD"
log_dbg "Upgrade command: $UPGRADE_CMD"
# Execute the appropriate action
case "$ACTION" in
s)
log_inf "Checking package status for container $CTNAME ($CTID)..."
if $DRY_RUN; then
log_dry_run "Would check package status for container $CTNAME ($CTID)."
else
UPDATES=$(pct exec "$CTID" -- /bin/sh -c "$PKG_STATUS_CMD")
if [ $? -ne 0 ]; then
log_err "Failed to check package status for container $CTNAME ($CTID)."
continue
fi
log_dbg "Updates available for container $CTNAME ($CTID): $UPDATES"
if [ "$UPDATES" -gt 0 ]; then
log_inf "Container $CTNAME ($CTID) STATUS: \033[1mHas Updates ($UPDATES)\033[0m"
else
log_dbg "Container $CTNAME ($CTID) STATUS: No Updates available"
fi
# Save the summary for later
STATUS_SUMMARIES+=("$CTNAME ($CTID): $UPDATES")
fi
;;
u)
log_inf "Preparing to update package cache for container $CTNAME ($CTID)..."
if $DRY_RUN; then
log_dry_run "Would execute update on $CTNAME ($CTID) with command: $UPDATE_CMD"
else
if confirm_action "Execute update on $CTNAME ($CTID) with command: $UPDATE_CMD"; then
if pct exec "$CTID" -- /bin/sh -c "$UPDATE_CMD"; then
log_inf "Update completed for $CTNAME ($CTID)"
else
log_err "Update failed for $CTNAME ($CTID)"
continue
fi
else
log_wrn "Update skipped for $CTNAME ($CTID)"
fi
fi
;;
U)
log_inf "Preparing to upgrade packages for container $CTNAME ($CTID)..."
if $DRY_RUN; then
log_dry_run "Would execute upgrade on $CTNAME ($CTID) with command: $UPDATE_CMD && $UPGRADE_CMD"
else
if confirm_action "Execute upgrade on $CTNAME ($CTID) with command: $UPDATE_CMD && $UPGRADE_CMD"; then
if pct exec "$CTID" -- /bin/sh -c "$UPDATE_CMD && $UPGRADE_CMD"; then
log_inf "Upgrade completed for $CTNAME ($CTID)"
else
log_err "Upgrade failed for $CTNAME ($CTID)"
continue
fi
else
log_wrn "Upgrade skipped for $CTNAME ($CTID)"
fi
fi
;;
esac
done
# Print summary of statuses
if [ "$ACTION" == "s" ] && [ ${#STATUS_SUMMARIES[@]} -gt 0 ]; then
log_inf "Summary of container statuses:"
for summary in "${STATUS_SUMMARIES[@]}"; do
log_inf "$summary"
done
fi
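# Example summary output of a status run (-s); hostnames and counts are illustrative:
#   [2024-08-06 21:30:00][INFO] Summary of container statuses:
#   [2024-08-06 21:30:00][INFO] webserver (100): 3
#   [2024-08-06 21:30:00][INFO] database (101): 0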