# Dataset row 12,100 | file_name: menu.py | file_path: cobbler_cobbler/cobbler/items/menu.py
"""
Cobbler module that contains the code for a Cobbler menu object.
Changelog:
V3.4.0 (unreleased):
* Changes:
* Constructor: ``kwargs`` can now be used to seed the item during creation.
* ``children``: The property was moved to the base class.
* ``parent``: The property was moved to the base class.
* ``from_dict()``: The method was moved to the base class.
V3.3.4 (unreleased):
* No changes
V3.3.3:
* Changed:
* ``check_if_valid()``: Now present in base class.
V3.3.2:
* No changes
V3.3.1:
* No changes
V3.3.0:
* Initial version of the item type.
* Added:
* display_name: str
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2021 Yuriy Chelpanov <yuriy.chelpanov@gmail.com>
import copy
from typing import TYPE_CHECKING, Any
from cobbler.decorator import LazyProperty
from cobbler.items.abstract.bootable_item import BootableItem
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Menu(BootableItem):
"""
A Cobbler menu object.
"""
TYPE_NAME = "menu"
COLLECTION_TYPE = "menu"
def __init__(self, api: "CobblerAPI", *args: Any, **kwargs: Any) -> None:
"""
Constructor
:param api: The Cobbler API object which is used for resolving information.
"""
super().__init__(api)
# Prevent attempts to clear the to_dict cache before the object is initialized.
self._has_initialized = False
self._display_name = ""
if len(kwargs) > 0:
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
#
# override some base class methods first (BootableItem)
#
def make_clone(self) -> "Menu":
"""
Clone this menu object. Please manually adjust all values yourself to make the cloned object unique.
:return: The cloned instance of this object.
"""
_dict = copy.deepcopy(self.to_dict())
_dict.pop("uid", None)
return Menu(self.api, **_dict)
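The clone pattern above (serialize with ``to_dict()``, drop the unique ``uid``, re-seed a new object through ``kwargs``) can be sketched standalone. The ``Toy`` class below is a hypothetical stand-in, not part of Cobbler:

```python
import copy
import uuid


class Toy:
    """Minimal stand-in illustrating the to_dict()/kwargs clone pattern."""

    def __init__(self, **kwargs):
        self.uid = uuid.uuid4().hex  # fresh identity for every instance
        self.display_name = ""
        # Seed attributes from kwargs, mirroring from_dict() seeding.
        for key, val in kwargs.items():
            setattr(self, key, val)

    def to_dict(self):
        return {"uid": self.uid, "display_name": self.display_name}

    def make_clone(self):
        _dict = copy.deepcopy(self.to_dict())
        _dict.pop("uid", None)  # the clone must not reuse the original uid
        return Toy(**_dict)


original = Toy(display_name="PXE menu")
clone = original.make_clone()
```

The deep copy matters when the serialized dict contains nested mutable values: the clone must not share them with the original.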
#
# specific methods for item.Menu
#
@LazyProperty
def display_name(self) -> str:
"""
Returns the display name.
:getter: Returns the display name for the boot menu.
:setter: Sets the display name for the boot menu.
"""
return self._display_name
@display_name.setter
def display_name(self, display_name: str) -> None:
"""
Setter for the display_name of the item.
:param display_name: The new display_name. If ``None`` the display_name will be set to an empty string.
"""
self._display_name = display_name
# Row metadata: size: 2,707 | language: Python | extension: .py | total_lines: 79 | avg_line_length: 27.987342 | max_line_length: 111 | alphanum_fraction: 0.638665 | repo_name: cobbler/cobbler | repo_stars: 2,597 | repo_forks: 653 | repo_open_issues: 318 | repo_license: GPL-2.0 | repo_extraction_date: 9/5/2024, 5:11:26 PM (Europe/Amsterdam)
# Dataset row 12,101 | file_name: system.py | file_path: cobbler_cobbler/cobbler/items/system.py
"""
All code belonging to Cobbler systems.
Changelog (System):
V3.4.0 (unreleased):
* Added:
* ``display_name``: str
* Changes:
* Constructor: ``kwargs`` can now be used to seed the item during creation.
* ``from_dict()``: The method was moved to the base class.
* ``parent``: The property was moved to the base class.
V3.3.4 (unreleased):
* Changed:
* The network interface ``default`` is not created on object creation.
V3.3.3:
* Changed:
* ``boot_loaders``: Can now be set to ``<<inherit>>``
* ``next_server_v4``: Can now be set to ``<<inherit>>``
* ``next_server_v6``: Can now be set to ``<<inherit>>``
* ``virt_cpus``: Can now be set to ``<<inherit>>``
* ``virt_file_size``: Can now be set to ``<<inherit>>``
* ``virt_disk_driver``: Can now be set to ``<<inherit>>``
* ``virt_auto_boot``: Can now be set to ``<<inherit>>``
* ``virt_ram``: Can now be set to ``<<inherit>>``
* ``virt_type``: Can now be set to ``<<inherit>>``
* ``virt_path``: Can now be set to ``<<inherit>>``
V3.3.2:
* No changes
V3.3.1:
* Changed:
* ``serial_device``: Default value is now ``-1``
V3.3.0:
* This release switched from pure attributes to properties (getters/setters).
* Added:
* ``next_server_v4``
* ``next_server_v6``
* Changed:
* ``virt_*``: Cannot be set to inherit anymore
* ``enable_gpxe``: Renamed to ``enable_ipxe``
* Removed:
* ``get_fields()``
* ``next_server`` - Please use one of ``next_server_v4`` or ``next_server_v6``
* ``set_boot_loader()`` - Moved to ``boot_loader`` property
* ``set_server()`` - Moved to ``server`` property
* ``set_next_server()`` - Moved to ``next_server`` property
* ``set_filename()`` - Moved to ``filename`` property
* ``set_proxy()`` - Moved to ``proxy`` property
* ``set_redhat_management_key()`` - Moved to ``redhat_management_key`` property
* ``get_redhat_management_key()`` - Moved to ``redhat_management_key`` property
* ``set_dhcp_tag()`` - Moved to ``NetworkInterface`` class property ``dhcp_tag``
* ``set_cnames()`` - Moved to ``NetworkInterface`` class property ``cnames``
* ``set_status()`` - Moved to ``status`` property
* ``set_static()`` - Moved to ``NetworkInterface`` class property ``static``
* ``set_management()`` - Moved to ``NetworkInterface`` class property ``management``
* ``set_dns_name()`` - Moved to ``NetworkInterface`` class property ``dns_name``
* ``set_hostname()`` - Moved to ``hostname`` property
* ``set_ip_address()`` - Moved to ``NetworkInterface`` class property ``ip_address``
* ``set_mac_address()`` - Moved to ``NetworkInterface`` class property ``mac_address``
* ``set_gateway()`` - Moved to ``gateway`` property
* ``set_name_servers()`` - Moved to ``name_servers`` property
* ``set_name_servers_search()`` - Moved to ``name_servers_search`` property
* ``set_netmask()`` - Moved to ``NetworkInterface`` class property ``netmask``
* ``set_if_gateway()`` - Moved to ``NetworkInterface`` class property ``if_gateway``
* ``set_virt_bridge()`` - Moved to ``NetworkInterface`` class property ``virt_bridge``
* ``set_interface_type()`` - Moved to ``NetworkInterface`` class property ``interface_type``
* ``set_interface_master()`` - Moved to ``NetworkInterface`` class property ``interface_master``
* ``set_bonding_opts()`` - Moved to ``NetworkInterface`` class property ``bonding_opts``
* ``set_bridge_opts()`` - Moved to ``NetworkInterface`` class property ``bridge_opts``
* ``set_ipv6_autoconfiguration()`` - Moved to ``ipv6_autoconfiguration`` property
* ``set_ipv6_default_device()`` - Moved to ``ipv6_default_device`` property
* ``set_ipv6_address()`` - Moved to ``NetworkInterface`` class property ``ipv6_address``
* ``set_ipv6_prefix()`` - Moved to ``NetworkInterface`` class property ``ipv6_prefix``
* ``set_ipv6_secondaries()`` - Moved to ``NetworkInterface`` class property ``ipv6_secondaries``
* ``set_ipv6_default_gateway()`` - Moved to ``NetworkInterface`` class property ``ipv6_default_gateway``
* ``set_ipv6_static_routes()`` - Moved to ``NetworkInterface`` class property ``ipv6_static_routes``
* ``set_ipv6_mtu()`` - Moved to ``NetworkInterface`` class property ``ipv6_mtu``
* ``set_mtu()`` - Moved to ``NetworkInterface`` class property ``mtu``
* ``set_connected_mode()`` - Moved to ``NetworkInterface`` class property ``connected_mode``
* ``set_enable_gpxe()`` - Moved to ``enable_gpxe`` property
* ``set_profile()`` - Moved to ``profile`` property
* ``set_image()`` - Moved to ``image`` property
* ``set_virt_cpus()`` - Moved to ``virt_cpus`` property
* ``set_virt_file_size()`` - Moved to ``virt_file_size`` property
* ``set_virt_disk_driver()`` - Moved to ``virt_disk_driver`` property
* ``set_virt_auto_boot()`` - Moved to ``virt_auto_boot`` property
* ``set_virt_pxe_boot()`` - Moved to ``virt_pxe_boot`` property
* ``set_virt_ram()`` - Moved to ``virt_ram`` property
* ``set_virt_type()`` - Moved to ``virt_type`` property
* ``set_virt_path()`` - Moved to ``virt_path`` property
* ``set_netboot_enabled()`` - Moved to ``netboot_enabled`` property
* ``set_autoinstall()`` - Moved to ``autoinstall`` property
* ``set_power_type()`` - Moved to ``power_type`` property
* ``set_power_identity_file()`` - Moved to ``power_identity_file`` property
* ``set_power_options()`` - Moved to ``power_options`` property
* ``set_power_user()`` - Moved to ``power_user`` property
* ``set_power_pass()`` - Moved to ``power_pass`` property
* ``set_power_address()`` - Moved to ``power_address`` property
* ``set_power_id()`` - Moved to ``power_id`` property
* ``set_repos_enabled()`` - Moved to ``repos_enabled`` property
* ``set_serial_device()`` - Moved to ``serial_device`` property
* ``set_serial_baud_rate()`` - Moved to ``serial_baud_rate`` property
V3.2.2:
* No changes
V3.2.1:
* Added:
* ``kickstart``: Resolves as a proxy to ``autoinstall``
V3.2.0:
* No changes
V3.1.2:
* Added:
* ``filename``: str - Inheritable
* ``set_filename()``
V3.1.1:
* No changes
V3.1.0:
* No changes
V3.0.1:
* File was moved from ``cobbler/item_system.py`` to ``cobbler/items/system.py``.
V3.0.0:
* Field definitions for network interfaces moved to own ``FIELDS`` array
* Added:
* ``boot_loader``: str - Inheritable
* ``next_server``: str - Inheritable
* ``power_options``: str
* ``power_identity_file``: str
* ``serial_device``: int
* ``serial_baud_rate``: int - One of "", "2400", "4800", "9600", "19200", "38400", "57600", "115200"
* ``set_next_server()``
* ``set_serial_device()``
* ``set_serial_baud_rate()``
* ``get_config_filename()``
* ``set_power_identity_file()``
* ``set_power_options()``
* Changed:
* ``kickstart``: Renamed to ``autoinstall``
* ``ks_meta``: Renamed to ``autoinstall_meta``
* ``from_datastruct``: Renamed to ``from_dict()``
* ``set_kickstart()``: Renamed to ``set_autoinstall()``
* Removed:
* ``redhat_management_server``
* ``set_ldap_enabled()``
* ``set_monit_enabled()``
* ``set_template_remote_kickstarts()``
* ``set_redhat_management_server()``
* ``set_name()``
V2.8.5:
* Initial tracking of changes for the changelog.
* Network interface definitions are part of this class.
* Added:
* ``name``: str
* ``uid``: str
* ``owners``: List[str] - Inheritable
* ``profile``: str - Name of the profile
* ``image``: str - Name of the image
* ``status``: str - One of "", "development", "testing", "acceptance", "production"
* ``kernel_options``: Dict[str, Any]
* ``kernel_options_post``: Dict[str, Any]
* ``ks_meta``: Dict[str, Any]
* ``enable_gpxe``: bool - Inheritable
* ``proxy``: str - Inheritable
* ``netboot_enabled``: bool
* ``kickstart``: str - Inheritable
* ``comment``: str
* ``depth``: int
* ``server``: str - Inheritable
* ``virt_path``: str - Inheritable
* ``virt_type``: str - Inheritable; One of "xenpv", "xenfv", "qemu", "kvm", "vmware", "openvz"
* ``virt_cpus``: int - Inheritable
* ``virt_file_size``: float - Inheritable
* ``virt_disk_driver``: str - Inheritable; One of "<<inherit>>", "raw", "qcow", "qcow2", "aio", "vmdk", "qed"
* ``virt_ram``: int - Inheritable
* ``virt_auto_boot``: bool - Inheritable
* ``virt_pxe_boot``: bool
* ``ctime``: float
* ``mtime``: float
* ``power_type``: str - Default loaded from settings key ``power_management_default_type``
* ``power_address``: str
* ``power_user``: str
* ``power_pass``: str
* ``power_id``: str
* ``hostname``: str
* ``gateway``: str
* ``name_servers``: List[str]
* ``name_servers_search``: List[str]
* ``ipv6_default_device``: str
* ``ipv6_autoconfiguration``: bool
* ``mgmt_classes``: List[Any] - Inheritable
* ``mgmt_parameters``: str - Inheritable
* ``boot_files``: Dict[str, Any]/List (not reverse engineerable) - Inheritable
* ``fetchable_files``: Dict[str, Any] - Inheritable
* ``template_files``: Dict[str, Any] - Inheritable
* ``redhat_management_key``: str - Inheritable
* ``redhat_management_server``: str - Inheritable
* ``template_remote_kickstarts``: bool - Default loaded from settings key ``template_remote_kickstarts``
* ``repos_enabled``: bool
* ``ldap_enabled``: bool
* ``ldap_type``: str - Default loaded from settings key ``ldap_management_default_type``
* ``monit_enabled``: bool
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2008, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import copy
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from cobbler import autoinstall_manager, enums, power_manager, utils, validate
from cobbler.cexceptions import CX
from cobbler.decorator import InheritableProperty, LazyProperty
from cobbler.items.abstract.bootable_item import BootableItem
from cobbler.items.network_interface import NetworkInterface
from cobbler.utils import filesystem_helpers, input_converters
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class System(BootableItem):
"""
A Cobbler system object.
"""
# Constants
TYPE_NAME = "system"
COLLECTION_TYPE = "system"
def __init__(self, api: "CobblerAPI", *args: Any, **kwargs: Any) -> None:
"""
Constructor
:param api: The Cobbler API
"""
super().__init__(api)
# Prevent attempts to clear the to_dict cache before the object is initialized.
self._has_initialized = False
self._interfaces: Dict[str, NetworkInterface] = {}
self._ipv6_autoconfiguration = False
self._repos_enabled = False
self._autoinstall = enums.VALUE_INHERITED
self._boot_loaders: Union[List[str], str] = enums.VALUE_INHERITED
self._enable_ipxe: Union[bool, str] = enums.VALUE_INHERITED
self._gateway = ""
self._hostname = ""
self._image = ""
self._ipv6_default_device = ""
self._name_servers = []
self._name_servers_search = []
self._netboot_enabled = False
self._next_server_v4 = enums.VALUE_INHERITED
self._next_server_v6 = enums.VALUE_INHERITED
self._filename = enums.VALUE_INHERITED
self._power_address = ""
self._power_id = ""
self._power_pass = ""
self._power_type = ""
self._power_user = ""
self._power_options = ""
self._power_identity_file = ""
self._profile = ""
self._proxy = enums.VALUE_INHERITED
self._redhat_management_key = enums.VALUE_INHERITED
self._server = enums.VALUE_INHERITED
self._status = ""
self._virt_auto_boot: Union[bool, str] = enums.VALUE_INHERITED
self._virt_cpus: Union[int, str] = enums.VALUE_INHERITED
self._virt_disk_driver = enums.VirtDiskDrivers.INHERITED
self._virt_file_size: Union[float, str] = enums.VALUE_INHERITED
self._virt_path = enums.VALUE_INHERITED
self._virt_pxe_boot = False
self._virt_ram: Union[int, str] = enums.VALUE_INHERITED
self._virt_type = enums.VirtType.INHERITED
self._serial_device = -1
self._serial_baud_rate = enums.BaudRates.DISABLED
self._display_name = ""
# Overwrite defaults from bootable_item.py
self._owners = enums.VALUE_INHERITED
self._autoinstall_meta = enums.VALUE_INHERITED
self._kernel_options = enums.VALUE_INHERITED
self._kernel_options_post = enums.VALUE_INHERITED
if len(kwargs) > 0:
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
def __getattr__(self, name: str) -> Any:
if name == "kickstart":
return self.autoinstall
if name == "ks_meta":
return self.autoinstall_meta
raise AttributeError(f'Attribute "{name}" did not exist on object type System.')
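The ``__getattr__`` above maps pre-rename attribute names onto their current properties; since ``__getattr__`` only fires when normal lookup fails, the current names are unaffected. A hedged standalone sketch (``LegacyAliases`` is illustrative only):

```python
class LegacyAliases:
    # Maps pre-3.x attribute names to their renamed successors.
    _ALIASES = {"kickstart": "autoinstall", "ks_meta": "autoinstall_meta"}

    def __init__(self):
        self.autoinstall = "default.ks"
        self.autoinstall_meta = {"tree": "http://example.invalid/os"}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        if name in self._ALIASES:
            return getattr(self, self._ALIASES[name])
        raise AttributeError(
            f'Attribute "{name}" did not exist on object type LegacyAliases.'
        )


obj = LegacyAliases()
```

This keeps old scripts working without carrying duplicate state: the alias always reflects the current attribute's value.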
#
# override some base class methods first (BootableItem)
#
def make_clone(self):
_dict = copy.deepcopy(self.to_dict())
_dict.pop("uid", None)
collection = self.api.systems()
# clear all these out to avoid DHCP/DNS conflicts
for interface in _dict["interfaces"].values():
if not collection.disabled_indexes["mac_address"]:
interface.pop("mac_address", None)
if not collection.disabled_indexes["ip_address"]:
interface.pop("ip_address", None)
if not collection.disabled_indexes["ipv6_address"]:
interface.pop("ipv6_address", None)
if not collection.disabled_indexes["dns_name"]:
interface.pop("dns_name", None)
return System(self.api, **_dict)
def check_if_valid(self):
"""
Checks if the current item passes logical validation.
:raises CX: In case name is missing. Additionally either image or profile is required.
"""
super().check_if_valid()
if not self.inmemory:
return
# System specific validation
if self.profile == "":
if self.image == "":
raise CX(
f"Error with system {self.name} - profile or image is required"
)
@BootableItem.name.setter
def name(self, name: str) -> None:
"""
The systems name.
:param name: system name string
"""
# We have defined the Getter in BaseItem. As such this linter error is incorrect.
BootableItem.name.fset(self, name) # type: ignore[reportOptionalCall]
for interface in self.interfaces.values():
interface.system_name = name
#
# specific methods for item.System
#
@LazyProperty
def interfaces(self) -> Dict[str, NetworkInterface]:
r"""
Represents all interfaces owned by the system.
:getter: The interfaces present. Has at least the ``default`` one.
:setter: Accepts not only the correct type but also a dict with dicts which will then be converted by the
setter.
"""
return self._interfaces
@interfaces.setter
def interfaces(self, value: Dict[str, Any]):
"""
This method needs to be able to take a dictionary from ``make_clone()``.
:param value: The new interfaces.
"""
if not isinstance(value, dict): # type: ignore
raise TypeError("interfaces must be of type dict")
dict_values = list(value.values())
collection = self.api.systems()
if all(isinstance(x, NetworkInterface) for x in dict_values):
for network_iface in value.values():
network_iface.system_name = self.name
collection.update_interfaces_indexes(self, value)
self._interfaces = value
return
if all(isinstance(x, dict) for x in dict_values):
for key in value:
network_iface = NetworkInterface(self.api, self.name)
network_iface.from_dict(value[key])
collection.update_interface_indexes(self, key, network_iface)
self._interfaces[key] = network_iface
return
raise ValueError(
"The values of the interfaces must be fully of type dict (one level with values) or "
"NetworkInterface objects"
)
def modify_interface(self, interface_values: Dict[str, Any]):
"""
Modifies a magic interface dictionary in the form of: {"macaddress-eth0" : "aa:bb:cc:dd:ee:ff"}
"""
for key in interface_values.keys():
(_, interface) = key.split("-", 1)
if interface not in self.interfaces:
self.__create_interface(interface)
self.interfaces[interface].modify_interface({key: interface_values[key]})
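The "magic" keys handled above encode both the field and the interface name in one string, split on the first dash. A standalone sketch of that parsing (``parse_magic_keys`` is a hypothetical helper, not a Cobbler API):

```python
def parse_magic_keys(interface_values):
    """Split "field-interface" keys into per-interface field dicts."""
    per_interface = {}
    for key, value in interface_values.items():
        # Split on the first dash only, so interface names may contain dashes.
        field, _, interface = key.partition("-")
        per_interface.setdefault(interface, {})[field] = value
    return per_interface


parsed = parse_magic_keys(
    {
        "macaddress-eth0": "aa:bb:cc:dd:ee:ff",
        "ipaddress-eth0": "192.0.2.10",
        "dnsname-eth1": "node1.example.com",
    }
)
```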
def delete_interface(self, name: Union[str, Dict[Any, Any]]) -> None:
"""
Used to remove an interface.
:raises TypeError: If the name of the interface is not of type str or dict.
"""
collection = self.api.systems()
if isinstance(name, str):
if not name:
return
if name in self.interfaces:
collection.remove_interface_from_indexes(self, name)
self.interfaces.pop(name)
return
if isinstance(name, dict):
interface_name = name.get("interface", "")
collection.remove_interface_from_indexes(self, interface_name)
self.interfaces.pop(interface_name)
return
raise TypeError("The name of the interface must be of type str or dict")
def rename_interface(self, old_name: str, new_name: str):
r"""
Used to rename an interface.
:raises TypeError: In case one of the params was not a ``str``.
:raises ValueError: In case the name for the old interface does not exist or the new name does.
"""
if not isinstance(old_name, str): # type: ignore
raise TypeError("The old_name of the interface must be of type str")
if not isinstance(new_name, str): # type: ignore
raise TypeError("The new_name of the interface must be of type str")
if old_name not in self.interfaces:
raise ValueError(f'Interface "{old_name}" does not exist')
if new_name in self.interfaces:
raise ValueError(f'Interface "{new_name}" already exists')
self.interfaces[new_name] = self.interfaces[old_name]
del self.interfaces[old_name]
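``rename_interface`` is, at its core, a guarded dict-key rename: the old name must exist and the new name must not. The same invariant can be shown standalone:

```python
def rename_key(mapping, old_name, new_name):
    """Rename a dict key in place, refusing lossy or no-op renames."""
    if old_name not in mapping:
        raise ValueError(f'Interface "{old_name}" does not exist')
    if new_name in mapping:
        raise ValueError(f'Interface "{new_name}" already exists')
    # pop() removes the old entry and hands its value to the new key.
    mapping[new_name] = mapping.pop(old_name)


interfaces = {"default": {"mac": "aa:bb:cc:dd:ee:ff"}}
rename_key(interfaces, "default", "eth0")
```

Checking ``new_name`` first would also work; checking ``old_name`` first gives the more useful error when both conditions fail.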
@LazyProperty
def hostname(self) -> str:
"""
hostname property.
:getter: Returns the value for ``hostname``.
:setter: Sets the value for the property ``hostname``.
"""
return self._hostname
@hostname.setter
def hostname(self, value: str):
"""
Setter for the hostname of the System class.
:param value: The new hostname
"""
if not isinstance(value, str): # type: ignore
raise TypeError("Field hostname of object system needs to be of type str!")
self._hostname = value
@LazyProperty
def status(self) -> str:
"""
status property.
:getter: Returns the value for ``status``.
:setter: Sets the value for the property ``status``.
"""
return self._status
@status.setter
def status(self, status: str):
"""
Setter for the status of the System class.
:param status: The new system status.
"""
if not isinstance(status, str): # type: ignore
raise TypeError("Field status of object system needs to be of type str!")
self._status = status
@InheritableProperty
def boot_loaders(self) -> List[str]:
"""
boot_loaders property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``boot_loaders``.
:setter: Sets the value for the property ``boot_loaders``.
"""
return self._resolve("boot_loaders")
@boot_loaders.setter # type: ignore[no-redef]
def boot_loaders(self, boot_loaders: Union[str, List[str]]):
"""
Setter of the boot loaders.
:param boot_loaders: The boot loaders for the system.
:raises CX: This is risen in case the bootloaders set are not valid ones.
"""
if not isinstance(boot_loaders, (str, list)): # type: ignore
raise TypeError("The bootloaders need to be either a str or list")
if boot_loaders == enums.VALUE_INHERITED:
self._boot_loaders = enums.VALUE_INHERITED
return
if boot_loaders in ("", []):
self._boot_loaders = []
return
if isinstance(boot_loaders, str):
boot_loaders_split = input_converters.input_string_or_list(boot_loaders)
else:
boot_loaders_split = boot_loaders
parent = self.logical_parent
if parent is not None:
# This can only be an item type that has the boot loaders property
parent_boot_loaders: List[str] = parent.boot_loaders # type: ignore
else:
self.logger.warning(
'Parent of System "%s" could not be found for resolving the parent bootloaders.',
self.name,
)
parent_boot_loaders = []
if not set(boot_loaders_split).issubset(parent_boot_loaders):
raise CX(
f'Error with system "{self.name}" - not all boot_loaders are supported (given:'
f'"{str(boot_loaders_split)}"; supported: "{str(parent_boot_loaders)}")'
)
self._boot_loaders = boot_loaders_split
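The core of the validation above is a subset check of the requested loaders against the parent's resolved list. A minimal standalone version (names and error wording are illustrative, not Cobbler's):

```python
def validate_boot_loaders(requested, parent_supported):
    """Accept only boot loaders that the parent item also supports."""
    if not set(requested).issubset(parent_supported):
        unsupported = sorted(set(requested) - set(parent_supported))
        raise ValueError(f"not all boot_loaders are supported: {unsupported}")
    return list(requested)


ok = validate_boot_loaders(["grub"], ["grub", "pxe", "ipxe"])
```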
@InheritableProperty
def server(self) -> str:
"""
server property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``server``.
:setter: Sets the value for the property ``server``.
"""
return self._resolve("server")
@server.setter # type: ignore[no-redef]
def server(self, server: str):
"""
If a system can't reach the boot server at the value configured in settings
because it doesn't have the same name on its subnet, this property provides an override.
:param server: The new value for the ``server`` property.
:raises TypeError: In case server is no string.
"""
if not isinstance(server, str): # type: ignore
raise TypeError("Field server of object system needs to be of type str!")
if server == "":
server = enums.VALUE_INHERITED
self._server = server
@InheritableProperty
def next_server_v4(self) -> str:
"""
next_server_v4 property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``next_server_v4``.
:setter: Sets the value for the property ``next_server_v4``.
"""
return self._resolve("next_server_v4")
@next_server_v4.setter # type: ignore[no-redef]
def next_server_v4(self, server: str = ""):
"""
Setter for the IPv4 next server. See profile.py for more details.
:param server: The address of the IPv4 next server. Must be a string or ``enums.VALUE_INHERITED``.
:raises TypeError: In case server is no string.
"""
if not isinstance(server, str): # type: ignore
raise TypeError("next_server_v4 must be a string.")
if server == enums.VALUE_INHERITED:
self._next_server_v4 = enums.VALUE_INHERITED
else:
self._next_server_v4 = validate.ipv4_address(server)
@InheritableProperty
def next_server_v6(self) -> str:
"""
next_server_v6 property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``next_server_v6``.
:setter: Sets the value for the property ``next_server_v6``.
"""
return self._resolve("next_server_v6")
@next_server_v6.setter # type: ignore[no-redef]
def next_server_v6(self, server: str = ""):
"""
Setter for the IPv6 next server. See profile.py for more details.
:param server: The address of the IPv6 next server. Must be a string or ``enums.VALUE_INHERITED``.
:raises TypeError: In case server is no string.
"""
if not isinstance(server, str): # type: ignore
raise TypeError("next_server_v6 must be a string.")
if server == enums.VALUE_INHERITED:
self._next_server_v6 = enums.VALUE_INHERITED
else:
self._next_server_v6 = validate.ipv6_address(server)
@InheritableProperty
def filename(self) -> str:
"""
filename property.
:getter: Returns the value for ``filename``.
:setter: Sets the value for the property ``filename``.
"""
if self.image != "":
return ""
return self._resolve("filename")
@filename.setter # type: ignore[no-redef]
def filename(self, filename: str):
"""
Setter for the filename of the System class.
:param filename: The new value for the ``filename`` property.
:raises TypeError: In case filename is no string.
"""
if not isinstance(filename, str): # type: ignore
raise TypeError("Field filename of object system needs to be of type str!")
if not filename:
self._filename = enums.VALUE_INHERITED
else:
self._filename = filename.strip()
@InheritableProperty
def proxy(self) -> str:
"""
proxy property. This corresponds by default to the setting ``proxy_url_int``.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``proxy``.
:setter: Sets the value for the property ``proxy``.
"""
if self.profile != "":
return self._resolve("proxy")
return self._resolve("proxy_url_int")
@proxy.setter # type: ignore[no-redef]
def proxy(self, proxy: str):
"""
Setter for the proxy of the System class.
:param proxy: The new value for the proxy.
:raises TypeError: In case proxy is no string.
"""
if not isinstance(proxy, str): # type: ignore
raise TypeError("Field proxy of object system needs to be of type str!")
self._proxy = proxy
@InheritableProperty
def redhat_management_key(self) -> str:
"""
redhat_management_key property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``redhat_management_key``.
:setter: Sets the value for the property ``redhat_management_key``.
"""
return self._resolve("redhat_management_key")
@redhat_management_key.setter # type: ignore[no-redef]
def redhat_management_key(self, management_key: str):
"""
Setter for the redhat_management_key of the System class.
:param management_key: The new value for the redhat management key
:raises TypeError: In case management_key is no string.
"""
if not isinstance(management_key, str): # type: ignore
raise TypeError(
"Field redhat_management_key of object system needs to be of type str!"
)
if management_key == "":
self._redhat_management_key = enums.VALUE_INHERITED
self._redhat_management_key = management_key
def get_mac_address(self, interface: str):
"""
Get the mac address, which may be implicit in the object name or explicit with --mac-address.
Use the explicit location first.
:param interface: The name of the interface to get the MAC of.
"""
intf = self.__get_interface(interface)
if intf.mac_address != "":
return intf.mac_address.strip()
return None
def get_ip_address(self, interface: str) -> str:
"""
Get the IP address for the given interface.
:param interface: The name of the interface to get the IP address of.
"""
intf = self.__get_interface(interface)
if intf.ip_address:
return intf.ip_address.strip()
return ""
def is_management_supported(self, cidr_ok: bool = True) -> bool:
"""
Can only add system PXE records if a MAC or IP address is available, else it's a koan only record.
:param cidr_ok: Deprecated parameter which is not used anymore.
"""
if self.name == "default":
return True
for interface in self.interfaces.values():
mac = interface.mac_address
ip_v4 = interface.ip_address
ip_v6 = interface.ipv6_address
if mac or ip_v4 or ip_v6:
return True
return False
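``is_management_supported`` reduces to "any interface carries a MAC or an IP address". Sketched over plain dicts instead of ``NetworkInterface`` objects (a simplification, not the real signature):

```python
def is_management_supported(name, interfaces):
    """PXE records need at least one MAC, IPv4 or IPv6 address."""
    if name == "default":
        return True  # the catch-all "default" system is always manageable
    return any(
        iface.get("mac_address") or iface.get("ip_address") or iface.get("ipv6_address")
        for iface in interfaces.values()
    )
```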
def __create_interface(self, interface: str):
"""
Creates or overwrites a network interface.
:param interface: The name of the interface
"""
self.interfaces[interface] = NetworkInterface(self.api, self.name)
def __get_interface(
self, interface_name: Optional[str] = "default"
) -> NetworkInterface:
"""
Tries to retrieve an interface and creates it in case the interface doesn't exist. If no name is given the
default interface is retrieved.
:param interface_name: The name of the interface. If ``None`` is given then ``default`` is used.
:raises TypeError: In case interface_name is no string.
:return: The requested interface.
"""
if interface_name is None:
interface_name = "default"
if not isinstance(interface_name, str): # type: ignore
raise TypeError("The name of an interface must always be of type str!")
if not interface_name:
interface_name = "default"
if interface_name not in self._interfaces:
self.__create_interface(interface_name)
return self._interfaces[interface_name]
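``__get_interface`` is a get-or-create lookup that also normalizes ``None`` and empty names to ``default``. The same behavior fits ``dict.setdefault``; ``get_interface`` below is a hypothetical standalone helper:

```python
def get_interface(interfaces, name=None, factory=dict):
    """Fetch an interface by name, creating it on first access."""
    if name is None:
        name = "default"  # no name given: fall back to the default interface
    if not isinstance(name, str):
        raise TypeError("The name of an interface must always be of type str!")
    if not name:
        name = "default"  # empty string also means the default interface
    return interfaces.setdefault(name, factory())


ifaces = {}
eth0 = get_interface(ifaces, "eth0")
default = get_interface(ifaces)
```

``setdefault`` both inserts the new value and returns the existing one on later calls, so repeated lookups hand back the same object.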
@LazyProperty
def gateway(self):
"""
gateway property.
:getter: Returns the value for ``gateway``.
:setter: Sets the value for the property ``gateway``.
"""
return self._gateway
@gateway.setter
def gateway(self, gateway: str):
"""
Set a gateway IPv4 address.
:param gateway: IP address
:returns: True or CX
"""
self._gateway = validate.ipv4_address(gateway)
@InheritableProperty
def name_servers(self) -> List[str]:
"""
name_servers property.
FIXME: Differentiate between IPv4/6
:getter: Returns the value for ``name_servers``.
:setter: Sets the value for the property ``name_servers``.
"""
return self._resolve("name_servers")
@name_servers.setter
def name_servers(self, data: Union[str, List[str]]):
"""
Set the DNS servers.
FIXME: Differentiate between IPv4/6
:param data: string or list of nameservers
:returns: True or CX
"""
self._name_servers = validate.name_servers(data)
@LazyProperty
def name_servers_search(self) -> List[str]:
"""
name_servers_search property.
:getter: Returns the value for ``name_servers_search``.
:setter: Sets the value for the property ``name_servers_search``.
"""
return self._resolve("name_servers_search")
@name_servers_search.setter
def name_servers_search(self, data: Union[str, List[Any]]):
"""
Set the DNS search paths.
:param data: string or list of search domains
:returns: True or CX
"""
self._name_servers_search = validate.name_servers_search(data)
@LazyProperty
def ipv6_autoconfiguration(self) -> bool:
"""
ipv6_autoconfiguration property.
:getter: Returns the value for ``ipv6_autoconfiguration``.
:setter: Sets the value for the property ``ipv6_autoconfiguration``.
"""
return self._ipv6_autoconfiguration
@ipv6_autoconfiguration.setter
def ipv6_autoconfiguration(self, value: bool):
"""
Setter for the ipv6_autoconfiguration of the System class.
:param value: The new value for the ``ipv6_autoconfiguration`` property.
"""
value = input_converters.input_boolean(value)
if not isinstance(value, bool): # type: ignore
raise TypeError("ipv6_autoconfiguration needs to be of type bool")
self._ipv6_autoconfiguration = value
@LazyProperty
def ipv6_default_device(self) -> str:
"""
ipv6_default_device property.
:getter: Returns the value for ``ipv6_default_device``.
:setter: Sets the value for the property ``ipv6_default_device``.
"""
return self._ipv6_default_device
@ipv6_default_device.setter
def ipv6_default_device(self, interface_name: str):
"""
Setter for the ipv6_default_device of the System class.
:param interface_name: The new value for the ``ipv6_default_device`` property.
"""
if not isinstance(interface_name, str): # type: ignore
raise TypeError(
"Field ipv6_default_device of object system needs to be of type str!"
)
self._ipv6_default_device = interface_name
@InheritableProperty
def enable_ipxe(self) -> bool:
"""
enable_ipxe property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``enable_ipxe``.
:setter: Sets the value for the property ``enable_ipxe``.
"""
return self._resolve("enable_ipxe")
@enable_ipxe.setter # type: ignore[no-redef]
def enable_ipxe(self, enable_ipxe: Union[str, bool]):
"""
Sets whether the system will use iPXE for booting.
:param enable_ipxe: If ipxe should be enabled or not.
:raises TypeError: In case enable_ipxe is not a boolean.
"""
if enable_ipxe == enums.VALUE_INHERITED:
self._enable_ipxe = enums.VALUE_INHERITED
return
enable_ipxe = input_converters.input_boolean(enable_ipxe)
if not isinstance(enable_ipxe, bool): # type: ignore
raise TypeError("enable_ipxe needs to be of type bool")
self._enable_ipxe = enable_ipxe
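The inherit-aware setters above all follow the same pattern: keep the sentinel value untouched, otherwise coerce the input and type-check it. A standalone sketch of that pattern (the sentinel literal and the helper name are assumptions for illustration, not Cobbler's actual API):

```python
# Illustrative sketch only: the sentinel literal and this helper's name are
# assumptions, not Cobbler's actual API.
VALUE_INHERITED = "<<inherit>>"


def normalize_inheritable_bool(value):
    """Keep the inherit sentinel as-is, otherwise coerce to a real bool."""
    if value == VALUE_INHERITED:
        return VALUE_INHERITED
    if isinstance(value, bool):
        return value
    lowered = str(value).strip().lower()
    if lowered in ("1", "true", "on", "yes", "y"):
        return True
    if lowered in ("0", "false", "off", "no", "n"):
        return False
    raise TypeError("enable_ipxe needs to be of type bool")
```

The early ``return`` for the sentinel is what preserves inheritance: the resolved value is only computed later, via ``_resolve()``, when the getter runs.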
@LazyProperty
def profile(self) -> str:
"""
profile property.
:getter: Returns the value for ``profile``.
:setter: Sets the value for the property ``profile``.
"""
return self._profile
@profile.setter
def profile(self, profile_name: str):
"""
Set the system to use a certain named profile. The profile must have already been loaded into the profiles
collection.
:param profile_name: The name of the profile which the system is underneath.
:raises TypeError: In case profile_name is no string.
:raises ValueError: In case profile_name does not exist.
"""
if not isinstance(profile_name, str): # type: ignore
raise TypeError("The name of a profile needs to be of type str.")
if profile_name in ["delete", "None", "~", ""]:
self._profile = ""
return
profile = self.api.profiles().find(name=profile_name, return_list=False)
if isinstance(profile, list):
raise ValueError("Search returned an ambiguous match!")
if profile is None:
raise ValueError(f'Profile with the name "{profile_name}" does not exist')
self.image = "" # mutual exclusion rule
self._profile = profile_name
self.depth = profile.depth + 1 # subprofiles have varying depths.
@LazyProperty
def image(self) -> str:
"""
image property.
:getter: Returns the value for ``image``.
:setter: Sets the value for the property ``image``.
"""
return self._image
@image.setter
def image(self, image_name: str):
"""
Set the system to use a certain named image. Works like ``set_profile()`` but cannot be used at the same time.
It's one or the other.
:param image_name: The name of the image which will act as a parent.
:raises ValueError: In case the image name was invalid.
:raises TypeError: In case image_name is no string.
"""
if not isinstance(image_name, str): # type: ignore
raise TypeError("The name of an image must be of type str.")
if image_name in ["delete", "None", "~", ""]:
self._image = ""
return
img = self.api.images().find(name=image_name)
if isinstance(img, list):
raise ValueError("Search returned an ambiguous match!")
if img is None:
raise ValueError(f'Image with the name "{image_name}" does not exist')
self.profile = "" # mutual exclusion rule
self._image = image_name
self.depth = img.depth + 1
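The mutual-exclusion rule between ``profile`` and ``image`` enforced by the two setters above can be sketched in isolation (the class name is hypothetical; lookup and depth handling are omitted):

```python
class MutualExclusionSketch:
    """Minimal stand-in showing the profile/image mutual-exclusion rule."""

    def __init__(self):
        self._profile = ""
        self._image = ""

    @property
    def profile(self):
        return self._profile

    @profile.setter
    def profile(self, name):
        self._image = ""  # setting a profile clears any image
        self._profile = name

    @property
    def image(self):
        return self._image

    @image.setter
    def image(self, name):
        self._profile = ""  # setting an image clears any profile
        self._image = name
```

Whichever parent is assigned last wins; the other is silently reset to the empty string, so a system is always underneath exactly one of the two.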
@InheritableProperty
def virt_cpus(self) -> int:
"""
virt_cpus property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_cpus``.
:setter: Sets the value for the property ``virt_cpus``.
"""
return self._resolve("virt_cpus")
@virt_cpus.setter # type: ignore[no-redef]
def virt_cpus(self, num: Union[int, str]):
"""
Setter for the virt_cpus of the System class.
:param num: The new value for the number of CPU cores.
"""
if num == enums.VALUE_INHERITED:
self._virt_cpus = enums.VALUE_INHERITED
return
self._virt_cpus = validate.validate_virt_cpus(num)
@InheritableProperty
def virt_file_size(self) -> float:
"""
virt_file_size property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_file_size``.
:setter: Sets the value for the property ``virt_file_size``.
"""
return self._resolve("virt_file_size")
@virt_file_size.setter # type: ignore[no-redef]
def virt_file_size(self, num: float):
"""
Setter for the virt_file_size of the System class.
:param num: The new virtual disk file size in GB.
"""
self._virt_file_size = validate.validate_virt_file_size(num)
@InheritableProperty
def virt_disk_driver(self) -> enums.VirtDiskDrivers:
"""
virt_disk_driver property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_disk_driver``.
:setter: Sets the value for the property ``virt_disk_driver``.
"""
return self._resolve_enum("virt_disk_driver", enums.VirtDiskDrivers)
@virt_disk_driver.setter # type: ignore[no-redef]
def virt_disk_driver(self, driver: Union[str, enums.VirtDiskDrivers]):
"""
Setter for the virt_disk_driver of the System class.
:param driver: The new disk driver for the virtual disk.
"""
self._virt_disk_driver = enums.VirtDiskDrivers.to_enum(driver)
@InheritableProperty
def virt_auto_boot(self) -> bool:
"""
virt_auto_boot property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_auto_boot``.
:setter: Sets the value for the property ``virt_auto_boot``.
"""
return self._resolve("virt_auto_boot")
@virt_auto_boot.setter # type: ignore[no-redef]
def virt_auto_boot(self, value: Union[bool, str]):
"""
Setter for the virt_auto_boot of the System class.
:param value: Whether the VM should automatically boot or not.
"""
if value == enums.VALUE_INHERITED:
self._virt_auto_boot = enums.VALUE_INHERITED
return
self._virt_auto_boot = validate.validate_virt_auto_boot(value)
@LazyProperty
def virt_pxe_boot(self) -> bool:
"""
virt_pxe_boot property.
:getter: Returns the value for ``virt_pxe_boot``.
:setter: Sets the value for the property ``virt_pxe_boot``.
"""
return self._virt_pxe_boot
@virt_pxe_boot.setter
def virt_pxe_boot(self, num: bool):
"""
Setter for the virt_pxe_boot of the System class.
:param num: Whether the VM should be installed via PXE boot or not.
"""
self._virt_pxe_boot = validate.validate_virt_pxe_boot(num)
@InheritableProperty
def virt_ram(self) -> int:
"""
virt_ram property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_ram``.
:setter: Sets the value for the property ``virt_ram``.
"""
return self._resolve("virt_ram")
@virt_ram.setter # type: ignore[no-redef]
def virt_ram(self, num: Union[int, str]):
"""
Setter for the virt_ram of the System class.
:param num: The new amount of virtual RAM in MB.
"""
self._virt_ram = validate.validate_virt_ram(num)
@InheritableProperty
def virt_type(self) -> enums.VirtType:
"""
virt_type property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_type``.
:setter: Sets the value for the property ``virt_type``.
"""
return self._resolve_enum("virt_type", enums.VirtType)
@virt_type.setter # type: ignore[no-redef]
def virt_type(self, vtype: Union[enums.VirtType, str]):
"""
Setter for the virt_type of the System class.
:param vtype: The new virtual type.
"""
self._virt_type = enums.VirtType.to_enum(vtype)
@InheritableProperty
def virt_path(self) -> str:
"""
virt_path property.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Returns the value for ``virt_path``.
:setter: Sets the value for the property ``virt_path``.
"""
return self._resolve("virt_path")
@virt_path.setter # type: ignore[no-redef]
def virt_path(self, path: str):
"""
Setter for the virt_path of the System class.
:param path: The new path.
"""
self._virt_path = validate.validate_virt_path(path, for_system=True)
@LazyProperty
def netboot_enabled(self) -> bool:
"""
netboot_enabled property.
:getter: Returns the value for ``netboot_enabled``.
:setter: Sets the value for the property ``netboot_enabled``.
"""
return self._netboot_enabled
@netboot_enabled.setter
def netboot_enabled(self, netboot_enabled: bool):
"""
If true, allows per-system PXE files to be generated on sync (or add). If false, these files are not generated,
thus eliminating the potential for an infinite install loop when systems are set to PXE boot first in the boot
order. In general, users who are PXE booting first in the boot order won't create system definitions, so this
feature primarily comes into play for programmatic users of the API, who want to initially create a system with
netboot enabled and then disable it after the system installs, as triggered by some action in automatic
installation file's %post section. For this reason, this option is not surfaced in the CLI, output, or
documentation (yet).
Use of this option does not affect the ability to use PXE menus. If an admin has machines set up to PXE only
after local boot fails, this option isn't even relevant.
:param netboot_enabled: The new value for the property.
:raises TypeError: In case netboot_enabled is not a boolean.
"""
netboot_enabled = input_converters.input_boolean(netboot_enabled)
if not isinstance(netboot_enabled, bool): # type: ignore
raise TypeError("netboot_enabled needs to be a bool")
self._netboot_enabled = netboot_enabled
@InheritableProperty
def autoinstall(self) -> str:
"""
autoinstall property.
:getter: Returns the value for ``autoinstall``.
:setter: Sets the value for the property ``autoinstall``.
"""
return self._resolve("autoinstall")
@autoinstall.setter # type: ignore[no-redef]
def autoinstall(self, autoinstall: str):
"""
Set the automatic installation template filepath, this must be a local file.
:param autoinstall: local automatic installation template file path
"""
autoinstall_mgr = autoinstall_manager.AutoInstallationManager(self.api)
self._autoinstall = autoinstall_mgr.validate_autoinstall_template_file_path(
autoinstall
)
@LazyProperty
def power_type(self) -> str:
"""
power_type property.
:getter: Returns the value for ``power_type``.
:setter: Sets the value for the property ``power_type``.
"""
return self._power_type
@power_type.setter
def power_type(self, power_type: str):
"""
Setter for the power_type of the System class.
:param power_type: The new value for the ``power_type`` property.
:raises TypeError: In case power_type is no string.
"""
if not isinstance(power_type, str): # type: ignore
raise TypeError("power_type must be of type str")
if not power_type:
self._power_type = ""
return
power_manager.validate_power_type(power_type)
self._power_type = power_type
@LazyProperty
def power_identity_file(self) -> str:
"""
power_identity_file property.
:getter: Returns the value for ``power_identity_file``.
:setter: Sets the value for the property ``power_identity_file``.
"""
return self._power_identity_file
@power_identity_file.setter
def power_identity_file(self, power_identity_file: str):
"""
Setter for the power_identity_file of the System class.
:param power_identity_file: The new value for the ``power_identity_file`` property.
:raises TypeError: In case power_identity_file is no string.
"""
if not isinstance(power_identity_file, str): # type: ignore
raise TypeError(
"Field power_identity_file of object system needs to be of type str!"
)
filesystem_helpers.safe_filter(power_identity_file)
self._power_identity_file = power_identity_file
@LazyProperty
def power_options(self) -> str:
"""
power_options property.
:getter: Returns the value for ``power_options``.
:setter: Sets the value for the property ``power_options``.
"""
return self._power_options
@power_options.setter
def power_options(self, power_options: str):
"""
Setter for the power_options of the System class.
:param power_options: The new value for the ``power_options`` property.
:raises TypeError: In case power_options is no string.
"""
if not isinstance(power_options, str): # type: ignore
raise TypeError(
"Field power_options of object system needs to be of type str!"
)
filesystem_helpers.safe_filter(power_options)
self._power_options = power_options
@LazyProperty
def power_user(self) -> str:
"""
power_user property.
:getter: Returns the value for ``power_user``.
:setter: Sets the value for the property ``power_user``.
"""
return self._power_user
@power_user.setter
def power_user(self, power_user: str):
"""
Setter for the power_user of the System class.
:param power_user: The new value for the ``power_user`` property.
:raises TypeError: In case power_user is no string.
"""
if not isinstance(power_user, str): # type: ignore
raise TypeError(
"Field power_user of object system needs to be of type str!"
)
filesystem_helpers.safe_filter(power_user)
self._power_user = power_user
@LazyProperty
def power_pass(self) -> str:
"""
power_pass property.
:getter: Returns the value for ``power_pass``.
:setter: Sets the value for the property ``power_pass``.
"""
return self._power_pass
@power_pass.setter
def power_pass(self, power_pass: str):
"""
Setter for the power_pass of the System class.
:param power_pass: The new value for the ``power_pass`` property.
:raises TypeError: In case power_pass is no string.
"""
if not isinstance(power_pass, str): # type: ignore
raise TypeError(
"Field power_pass of object system needs to be of type str!"
)
filesystem_helpers.safe_filter(power_pass)
self._power_pass = power_pass
@LazyProperty
def power_address(self) -> str:
"""
power_address property.
:getter: Returns the value for ``power_address``.
:setter: Sets the value for the property ``power_address``.
"""
return self._power_address
@power_address.setter
def power_address(self, power_address: str):
"""
Setter for the power_address of the System class.
:param power_address: The new value for the ``power_address`` property.
:raises TypeError: In case power_address is no string.
"""
if not isinstance(power_address, str): # type: ignore
raise TypeError(
"Field power_address of object system needs to be of type str!"
)
filesystem_helpers.safe_filter(power_address)
self._power_address = power_address
@LazyProperty
def power_id(self) -> str:
"""
power_id property.
:getter: Returns the value for ``power_id``.
:setter: Sets the value for the property ``power_id``.
"""
return self._power_id
@power_id.setter
def power_id(self, power_id: str):
"""
Setter for the power_id of the System class.
:param power_id: The new value for the ``power_id`` property.
:raises TypeError: In case power_id is no string.
"""
if not isinstance(power_id, str): # type: ignore
raise TypeError("Field power_id of object system needs to be of type str!")
filesystem_helpers.safe_filter(power_id)
self._power_id = power_id
@LazyProperty
def repos_enabled(self) -> bool:
"""
repos_enabled property.
:getter: Returns the value for ``repos_enabled``.
:setter: Sets the value for the property ``repos_enabled``.
"""
return self._repos_enabled
@repos_enabled.setter
def repos_enabled(self, repos_enabled: bool):
"""
Setter for the repos_enabled of the System class.
:param repos_enabled: The new value for the ``repos_enabled`` property.
:raises TypeError: In case repos_enabled is not a boolean.
"""
repos_enabled = input_converters.input_boolean(repos_enabled)
if not isinstance(repos_enabled, bool): # type: ignore
raise TypeError(
"Field repos_enabled of object system needs to be of type bool!"
)
self._repos_enabled = repos_enabled
@LazyProperty
def serial_device(self) -> int:
"""
serial_device property. "-1" disables the serial device functionality completely.
:getter: Returns the value for ``serial_device``.
:setter: Sets the value for the property ``serial_device``.
"""
return self._serial_device
@serial_device.setter
def serial_device(self, device_number: int):
"""
Setter for the serial_device of the System class.
:param device_number: The number of the serial device which is going to be used.
"""
self._serial_device = validate.validate_serial_device(device_number)
@LazyProperty
def serial_baud_rate(self) -> enums.BaudRates:
"""
serial_baud_rate property. The value "disabled" will disable the functionality completely.
:getter: Returns the value for ``serial_baud_rate``.
:setter: Sets the value for the property ``serial_baud_rate``.
"""
return self._serial_baud_rate
@serial_baud_rate.setter
def serial_baud_rate(self, baud_rate: int):
"""
Setter for the serial_baud_rate of the System class.
:param baud_rate: The new value for the ``baud_rate`` property.
"""
self._serial_baud_rate = validate.validate_serial_baud_rate(baud_rate)
def get_config_filename(
self, interface: str, loader: Optional[str] = None
) -> Optional[str]:
"""
The configuration file each system uses for PXE is derived from either the MAC address or the hex version of the
IP address. If neither is available, the given name is used, though a plain name will be unsuitable for PXE
configuration (for this, check system.is_management_supported()). This same file is used to store system config
information in the Apache tree, so it's still relevant.
:param interface: Name of the interface.
:param interface: Name of the interface.
:param loader: Bootloader type.
"""
boot_loaders = self.boot_loaders
if loader is None:
if (
"grub" in boot_loaders or len(boot_loaders) < 1
): # pylint: disable=unsupported-membership-test
loader = "grub"
else:
loader = boot_loaders[0] # pylint: disable=unsubscriptable-object
if interface not in self.interfaces:
self.logger.warning(
'System "%s" did not have an interface with the name "%s" attached to it.',
self.name,
interface,
)
return None
if self.name == "default":
if loader == "grub":
return None
return "default"
mac = self.get_mac_address(interface)
ip_address = self.get_ip_address(interface)
if mac is not None and mac != "":
if loader == "grub":
return mac.lower()
return "01-" + "-".join(mac.split(":")).lower()
if ip_address != "":
return utils.get_host_ip(ip_address)
return self.name
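The naming scheme implemented by ``get_config_filename()`` can be shown with a self-contained sketch. The function name, parameters, and defaults here are illustrative; ``ip_hex`` stands in for the hex form that the real code obtains via ``utils.get_host_ip()``:

```python
def pxe_config_filename(mac="", ip_hex="", name="default", loader="grub"):
    """Sketch of the config-file naming used by get_config_filename()."""
    if mac:
        if loader == "grub":
            # GRUB network boot looks files up by the plain lowercase MAC.
            return mac.lower()
        # pxelinux/syslinux convention: ARP hardware type "01" followed by
        # the dash-separated, lowercased MAC address.
        return "01-" + "-".join(mac.split(":")).lower()
    if ip_hex:
        return ip_hex
    # Last resort: fall back to the system name itself.
    return name
```

So a system with MAC ``AA:BB:CC:DD:EE:FF`` gets ``aa:bb:cc:dd:ee:ff`` under GRUB but ``01-aa-bb-cc-dd-ee-ff`` under pxelinux, which matches the lookup order those bootloaders use on the TFTP server.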
@LazyProperty
def display_name(self) -> str:
"""
Returns the display name.
:getter: Returns the display name for the boot menu.
:setter: Sets the display name for the boot menu.
"""
return self._display_name
@display_name.setter
def display_name(self, display_name: str):
"""
Setter for the display_name of the item.
:param display_name: The new display_name. If ``None`` the display_name will be set to an empty string.
"""
self._display_name = display_name
# ===========================================================================
# File: cobbler_cobbler/cobbler/items/image.py
# ===========================================================================
"""
Cobbler module that contains the code for a Cobbler image object.
Changelog:
V3.4.0 (unreleased):
* Added:
* ``display_name``
* Changed:
* Constructor: ``kwargs`` can now be used to seed the item during creation.
* ``autoinstall``: Restored inheritance of the property.
* ``children``: The property was moved to the base class.
* ``from_dict()``: The method was moved to the base class.
* ``virt_disk_driver``: Restored inheritance of the property.
* ``virt_ram``: Restored inheritance of the property.
* ``virt_type``: Restored inheritance of the property.
* ``virt_bridge``: Restored inheritance of the property.
V3.3.4 (unreleased):
* No changes
V3.3.3:
* Added:
* ``children``
* Changes:
* ``virt_file_size``: Inherits from the settings again
* ``boot_loaders``: Inherits from the settings again
V3.3.2:
* No changes
V3.3.1:
* No changes
V3.3.0:
* This release switched from pure attributes to properties (getters/setters).
* Added:
* ``boot_loaders``: list
* ``menu``: str
* ``supported_boot_loaders``: list
* ``from_dict()``
* Moved to parent class (Item):
* ``ctime``: float
* ``mtime``: float
* ``depth``: int
* ``parent``: str
* ``uid``: str
* ``comment``: str
* ``name``: str
* Removed:
* ``get_fields()``
* ``get_parent()``
* ``set_arch()`` - Please use the ``arch`` property.
* ``set_autoinstall()`` - Please use the ``autoinstall`` property.
* ``set_file()`` - Please use the ``file`` property.
* ``set_os_version()`` - Please use the ``os_version`` property.
* ``set_breed()`` - Please use the ``breed`` property.
* ``set_image_type()`` - Please use the ``image_type`` property.
* ``set_virt_cpus()`` - Please use the ``virt_cpus`` property.
* ``set_network_count()`` - Please use the ``network_count`` property.
* ``set_virt_auto_boot()`` - Please use the ``virt_auto_boot`` property.
* ``set_virt_file_size()`` - Please use the ``virt_file_size`` property.
* ``set_virt_disk_driver()`` - Please use the ``virt_disk_driver`` property.
* ``set_virt_ram()`` - Please use the ``virt_ram`` property.
* ``set_virt_type()`` - Please use the ``virt_type`` property.
* ``set_virt_bridge()`` - Please use the ``virt_bridge`` property.
* ``set_virt_path()`` - Please use the ``virt_path`` property.
* ``get_valid_image_types()``
* Changes:
* ``arch``: str -> enums.Archs
* ``autoinstall``: str -> enums.VALUE_INHERITED
* ``image_type``: str -> enums.ImageTypes
* ``virt_auto_boot``: Union[bool, SETTINGS:virt_auto_boot] -> bool
* ``virt_bridge``: Union[str, SETTINGS:default_virt_bridge] -> str
* ``virt_disk_driver``: Union[str, SETTINGS:default_virt_disk_driver] -> enums.VirtDiskDrivers
* ``virt_file_size``: Union[float, SETTINGS:default_virt_file_size] -> float
* ``virt_ram``: Union[int, SETTINGS:default_virt_ram] -> int
* ``virt_type``: Union[str, SETTINGS:default_virt_type] -> enums.VirtType
V3.2.2:
* No changes
V3.2.1:
* Added:
* ``kickstart``: Resolves as a proxy to ``autoinstall``
V3.2.0:
* No changes
V3.1.2:
* No changes
V3.1.1:
* No changes
V3.1.0:
* No changes
V3.0.1:
* No changes
V3.0.0:
* Added:
* ``set_autoinstall()``
* Changes:
* Rename: ``kickstart`` -> ``autoinstall``
* Removed:
* ``set_kickstart()`` - Please use ``set_autoinstall()``
V2.8.5:
* Initial tracking of changes for the changelog.
* Added:
* ``ctime``: float
* ``depth``: int
* ``mtime``: float
* ``parent``: str
* ``uid``: str
* ``arch``: str
* ``kickstart``: str
* ``breed``: str
* ``comment``: str
* ``file``: str
* ``image_type``: str
* ``name``: str
* ``network_count``: int
* ``os_version``: str
* ``owners``: Union[list, SETTINGS:default_ownership]
* ``virt_auto_boot``: Union[bool, SETTINGS:virt_auto_boot]
* ``virt_bridge``: Union[str, SETTINGS:default_virt_bridge]
* ``virt_cpus``: int
* ``virt_disk_driver``: Union[str, SETTINGS:default_virt_disk_driver]
* ``virt_file_size``: Union[float, SETTINGS:default_virt_file_size]
* ``virt_path``: str
* ``virt_ram``: Union[int, SETTINGS:default_virt_ram]
* ``virt_type``: Union[str, SETTINGS:default_virt_type]
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import copy
from typing import TYPE_CHECKING, Any, List, Union
from cobbler import autoinstall_manager, enums, validate
from cobbler.cexceptions import CX
from cobbler.decorator import InheritableProperty, LazyProperty
from cobbler.items.abstract.bootable_item import BootableItem
from cobbler.utils import input_converters, signatures
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Image(BootableItem):
"""
A Cobbler Image. Tracks a virtual or physical image, as opposed to an answer-file (autoinst) led installation.
"""
TYPE_NAME = "image"
COLLECTION_TYPE = "image"
def __init__(self, api: "CobblerAPI", *args: Any, **kwargs: Any) -> None:
"""
Constructor
:param api: The Cobbler API object which is used for resolving information.
"""
super().__init__(api)
# Prevent attempts to clear the to_dict cache before the object is initialized.
self._has_initialized = False
self._arch = enums.Archs.X86_64
self._autoinstall = enums.VALUE_INHERITED
self._breed = ""
self._file = ""
self._image_type = enums.ImageTypes.DIRECT
self._network_count = 0
self._os_version = ""
self._supported_boot_loaders: List[str] = []
self._boot_loaders: Union[List[str], str] = enums.VALUE_INHERITED
self._menu = ""
self._display_name = ""
self._virt_auto_boot: Union[str, bool] = enums.VALUE_INHERITED
self._virt_bridge = enums.VALUE_INHERITED
self._virt_cpus = 1
self._virt_disk_driver: enums.VirtDiskDrivers = enums.VirtDiskDrivers.INHERITED
self._virt_file_size: Union[str, float] = enums.VALUE_INHERITED
self._virt_path = ""
self._virt_ram: Union[str, int] = enums.VALUE_INHERITED
self._virt_type: Union[str, enums.VirtType] = enums.VirtType.INHERITED
if len(kwargs):
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
def __getattr__(self, name: str):
if name == "kickstart":
return self.autoinstall
raise AttributeError(f'Attribute "{name}" did not exist on object type Image.')
#
# override some base class methods first (BootableItem)
#
def make_clone(self):
"""
Clone this image object. Please manually adjust all values yourself to make the cloned object unique.
:return: The cloned instance of this object.
"""
_dict = copy.deepcopy(self.to_dict())
_dict.pop("uid", None)
return Image(self.api, **_dict)
#
# specific methods for item.Image
#
@LazyProperty
def arch(self) -> enums.Archs:
"""
Represents the architecture the image has. If deployed to a physical host this should be enforced, a virtual
image may be deployed on a host with any architecture.
:getter: The current architecture. Default is ``X86_64``.
:setter: Should be of the enum type or str. May raise an exception in case the architecture is not known to
Cobbler.
"""
return self._arch
@arch.setter
def arch(self, arch: Union[str, enums.Archs]):
"""
The field is mainly relevant to PXE provisioning.
See comments for arch property in distro.py, this works the same.
:param arch: The new architecture to set.
"""
self._arch = enums.Archs.to_enum(arch)
@InheritableProperty
def autoinstall(self) -> str:
"""
Property for the automatic installation file path, this must be a local file.
It may not make sense for images to have automatic installation templates. It really doesn't. However if the
image type is 'iso' koan can create a virtual floppy and shove an answer file on it, to script an installation.
This may not be an automatic installation template per se, it might be a Windows answer file (SIF) etc.
This property can inherit from a parent. Which is actually the default value.
:getter: The path relative to the template directory.
:setter: The location of the template relative to the template base directory.
"""
return self._resolve("autoinstall")
@autoinstall.setter
def autoinstall(self, autoinstall: str):
"""
Set the automatic installation file path, this must be a local file.
:param autoinstall: local automatic installation template file path
"""
autoinstall_mgr = autoinstall_manager.AutoInstallationManager(self.api)
self._autoinstall = autoinstall_mgr.validate_autoinstall_template_file_path(
autoinstall
)
@LazyProperty
def file(self) -> str:
"""
Stores the image location. This should be accessible on all nodes that need to access it.
Format: can be one of the following:
* username:password@hostname:/path/to/the/filename.ext
* username@hostname:/path/to/the/filename.ext
* hostname:/path/to/the/filename.ext
* /path/to/the/filename.ext
:getter: The path to the image location or an empty string.
:setter: May raise a TypeError or SyntaxError in case the validation of the location fails.
"""
return self._file
@file.setter
def file(self, filename: str):
"""
The setter for the image location.
:param filename: The location where the image is stored.
:raises SyntaxError: In case a protocol was found.
"""
if not isinstance(filename, str): # type: ignore
raise TypeError("file must be of type str to be parsable.")
if not filename:
self._file = ""
return
# validate file location format
if filename.find("://") != -1:
raise SyntaxError(
"Invalid image file path location, it should not contain a protocol"
)
uri = filename
auth = ""
hostname = ""
if filename.find("@") != -1:
auth, filename = filename.split("@")
# extract the hostname
# 1. if we have a colon, then everything before it is a hostname
# 2. if we don't have a colon, there is no hostname
if filename.find(":") != -1:
hostname, filename = filename.split(":")
elif filename[0] != "/":
raise SyntaxError(f"invalid file: {filename}")
# raise an exception if we don't have a valid path
if len(filename) > 0 and filename[0] != "/":
raise SyntaxError(f"file contains an invalid path: {filename}")
if filename.find("/") != -1:
_, filename = filename.rsplit("/", 1)
if len(filename) == 0:
raise SyntaxError("missing filename")
if len(auth) > 0 and len(hostname) == 0:
raise SyntaxError(
"a hostname must be specified with authentication details"
)
self._file = uri
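The decomposition performed by the ``file`` setter can be sketched as a small standalone parser. The function name is illustrative and not part of Cobbler's API; it only shows how the ``[user[:password]@][hostname:]/path`` forms listed in the docstring are split apart:

```python
def split_image_location(uri):
    """Sketch: split "[user[:password]@][hostname:]/path" into parts.

    Mirrors the splitting order of the ``file`` setter above: credentials
    before "@" first, then the hostname before the first ":".
    """
    auth = hostname = ""
    path = uri
    if "@" in path:
        auth, path = path.split("@", 1)
    if ":" in path:
        hostname, path = path.split(":", 1)
    return auth, hostname, path
```

The real setter additionally rejects URIs containing ``://``, requires the remaining path to be absolute and non-empty, and refuses credentials without a hostname.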
@LazyProperty
def os_version(self) -> str:
r"""
The operating system version which the image contains.
:getter: The sanitized operating system version.
:setter: Accepts a str which will be validated against the ``distro_signatures.json``.
"""
return self._os_version
@os_version.setter
def os_version(self, os_version: str):
"""
Set the operating system version with this setter.
:param os_version: This must be a valid OS-Version.
"""
self._os_version = validate.validate_os_version(os_version, self.breed)
@LazyProperty
def breed(self) -> str:
r"""
The operating system breed.
:getter: Returns the current breed.
:setter: When setting this it is validated against the ``distro_signatures.json`` file.
"""
return self._breed
@breed.setter
def breed(self, breed: str):
"""
Set the operating system breed with this setter.
:param breed: The breed of the operating system which is available in the image.
"""
self._breed = validate.validate_breed(breed)
@LazyProperty
def image_type(self) -> enums.ImageTypes:
"""
Indicates what type of image this is.
direct = something like "memdisk", physical only
iso = a bootable ISO that pxe's or can be used for virt installs, virtual only
virt-clone = a cloned virtual disk (FIXME: not yet supported), virtual only
memdisk = hdd image (physical only)
:getter: The enum type value of the image type.
:setter: Accepts str-like and enum type values and raises a TypeError or ValueError in the case of a problem.
"""
return self._image_type
@image_type.setter
def image_type(self, image_type: Union[enums.ImageTypes, str]):
"""
The setter which accepts enum type or str type values. Latter ones will be automatically converted if possible.
:param image_type: One of the four options from above.
:raises TypeError: In case a disallowed type was found.
:raises ValueError: In case the conversion from str could not be executed successfully.
"""
if not isinstance(image_type, (enums.ImageTypes, str)): # type: ignore
raise TypeError("image_type must be of type str or enum.ImageTypes")
if isinstance(image_type, str):
if not image_type:
# FIXME: Add None Image type
self._image_type = enums.ImageTypes.DIRECT
return
try:
image_type = enums.ImageTypes[image_type.upper()]
except KeyError as error:
raise ValueError(
f"image_type choices include: {list(map(str, enums.ImageTypes))}"
) from error
# str was converted now it must be an enum.ImageTypes
if not isinstance(image_type, enums.ImageTypes): # type: ignore
raise TypeError("image_type needs to be of type enums.ImageTypes")
if image_type not in enums.ImageTypes:
raise ValueError(
f"image type must be one of the following: {', '.join(list(map(str, enums.ImageTypes)))}"
)
self._image_type = image_type
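The str-or-enum normalisation in the ``image_type`` setter is a pattern that repeats across these item classes. A self-contained sketch, using a hypothetical stand-in enum rather than ``cobbler.enums.ImageTypes`` (and with a dash-to-underscore step added so the ``virt-clone`` value maps to a valid member name):

```python
import enum


class ImageTypesSketch(enum.Enum):
    """Hypothetical stand-in for cobbler.enums.ImageTypes."""

    DIRECT = "direct"
    ISO = "iso"
    VIRT_CLONE = "virt-clone"
    MEMDISK = "memdisk"


def to_image_type(value):
    """Normalise a str or enum member, in the spirit of the setter above."""
    if isinstance(value, ImageTypesSketch):
        return value
    try:
        # Look the member up by (upper-cased) name, e.g. "iso" -> ISO.
        return ImageTypesSketch[value.upper().replace("-", "_")]
    except KeyError as error:
        raise ValueError(
            f"image_type choices include: {list(map(str, ImageTypesSketch))}"
        ) from error
```

Accepting both representations keeps the XML-RPC/CLI layers (which pass strings) and internal callers (which pass enum members) on the same setter.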
@LazyProperty
def virt_cpus(self) -> int:
"""
The amount of vCPU cores used in case the image is being deployed on top of a VM host.
:getter: The cores used.
:setter: The new number of cores.
"""
return self._virt_cpus
@virt_cpus.setter
def virt_cpus(self, num: int):
"""
Setter for the number of virtual cpus.
:param num: The number of virtual cpu cores.
"""
self._virt_cpus = validate.validate_virt_cpus(num)
@LazyProperty
def network_count(self) -> int:
"""
Represents the number of virtual NICs this image has.
.. deprecated:: 3.3.0
This is nowhere used in the project and will be removed in a future release.
:getter: The number of networks.
:setter: Raises a ``TypeError`` in case the value is not an int.
"""
return self._network_count
@network_count.setter
def network_count(self, network_count: Union[int, str]):
"""
Setter for the number of networks.
:param network_count: If ``None`` or empty it will be set to ``1``, otherwise the given integer value will be set.
:raises TypeError: In case the network_count was not of type int.
"""
if network_count is None or network_count == "": # type: ignore
network_count = 1
if not isinstance(network_count, int): # type: ignore
raise TypeError(
"Field network_count of object image needs to be of type int."
)
self._network_count = network_count
@InheritableProperty
def virt_auto_boot(self) -> bool:
r"""
Whether the VM should be booted when booting the host or not.
:getter: ``True`` means autoboot is enabled, otherwise VM is not booted automatically.
:setter: The new state for the property.
"""
return self._resolve("virt_auto_boot")
@virt_auto_boot.setter
def virt_auto_boot(self, num: Union[str, bool]):
"""
Setter for the virtual automatic boot option.
:param num: May be "0" (disabled) or "1" (enabled), will be converted to a real bool.
"""
self._virt_auto_boot = validate.validate_virt_auto_boot(num)
@InheritableProperty
def virt_file_size(self) -> float:
r"""
The size of the image and thus the usable size for the guest.
.. warning:: There is a regression which makes the usage of multiple disks not possible right now. This will be
fixed in a future release.
:getter: The size of the image(s) in GB.
:setter: The float with the new size in GB.
"""
return self._resolve("virt_file_size")
@virt_file_size.setter
def virt_file_size(self, num: float):
"""
Setter for the virtual file size of the image.
:param num: A non-negative integer (0 means default). Can also be a comma separated list for usage with
            multiple disks.
"""
self._virt_file_size = validate.validate_virt_file_size(num)
@InheritableProperty
def virt_disk_driver(self) -> enums.VirtDiskDrivers:
"""
The type of disk driver used for storing the image.
:getter: The enum type representation of the disk driver.
:setter: May be a ``str`` with the name of the disk driver or from the enum type directly.
"""
return self._resolve_enum("virt_disk_driver", enums.VirtDiskDrivers)
@virt_disk_driver.setter
def virt_disk_driver(self, driver: enums.VirtDiskDrivers):
"""
Setter for the virtual disk driver.
:param driver: The virtual disk driver which will be set.
"""
self._virt_disk_driver = enums.VirtDiskDrivers.to_enum(driver)
@InheritableProperty
def virt_ram(self) -> int:
"""
The amount of RAM given to the guest in MB.
:getter: The amount of RAM currently assigned to the image.
:setter: The new amount of ram. Must be an integer.
"""
return self._resolve("virt_ram")
@virt_ram.setter
def virt_ram(self, num: int):
"""
Setter for the amount of virtual RAM the machine will have.
:param num: 0 tells Koan to just choose a reasonable default.
"""
self._virt_ram = validate.validate_virt_ram(num)
@InheritableProperty
def virt_type(self) -> enums.VirtType:
"""
The type of image used.
:getter: The value of the virtual machine.
:setter: May be of the enum type or a str which is then converted to the enum type.
"""
return self._resolve_enum("virt_type", enums.VirtType)
@virt_type.setter
def virt_type(self, vtype: enums.VirtType):
"""
Setter for the virtual type
:param vtype: May be one of "qemu", "kvm", "xenpv", "xenfv", "vmware", "vmwarew", "openvz" or "auto".
"""
self._virt_type = enums.VirtType.to_enum(vtype)
@InheritableProperty
def virt_bridge(self) -> str:
r"""
The name of the virtual bridge used for networking.
.. warning:: The new validation for the setter is not working. Thus the inheritance from the settings is broken.
:getter: The name of the bridge.
:setter: The new name of the bridge. If set to an empty ``str``, it will be taken from the settings.
"""
return self._resolve("virt_bridge")
@virt_bridge.setter
def virt_bridge(self, vbridge: str):
"""
Setter for the virtual bridge which is used.
:param vbridge: The name of the virtual bridge to use.
"""
self._virt_bridge = validate.validate_virt_bridge(vbridge)
@LazyProperty
def virt_path(self) -> str:
"""
Represents the location where the image for the VM is stored.
:getter: The path.
:setter: The value is validated as a reasonable path before it is stored.
"""
return self._virt_path
@virt_path.setter
def virt_path(self, path: str):
"""
Setter for the virtual path which is used.
:param path: The path to where the virtual image is stored.
"""
self._virt_path = validate.validate_virt_path(path)
@LazyProperty
def menu(self) -> str:
"""
Property to represent the menu which this image should be put into.
:getter: The name of the menu or an empty str.
:setter: Should only be the name of the menu not the object. May raise ``CX`` in case the menu does not exist.
"""
return self._menu
@menu.setter
def menu(self, menu: str):
"""
Setter for the menu property.
:param menu: The menu for the image.
:raises CX: In case the menu to be set could not be found.
"""
if menu and menu != "":
menu_list = self.api.menus()
if not menu_list.find(name=menu):
raise CX(f"menu {menu} not found")
self._menu = menu
@LazyProperty
def display_name(self) -> str:
"""
Returns the display name.
:getter: Returns the display name for the boot menu.
:setter: Sets the display name for the boot menu.
"""
return self._display_name
@display_name.setter
def display_name(self, display_name: str):
"""
Setter for the display_name of the item.
:param display_name: The new display_name. If ``None`` the display_name will be set to an empty string.
"""
self._display_name = display_name
@property
def supported_boot_loaders(self) -> List[str]:
"""
Read only property which represents the subset of settable bootloaders.
:getter: The bootloaders which are available for being set.
"""
if len(self._supported_boot_loaders) == 0:
self._supported_boot_loaders = signatures.get_supported_distro_boot_loaders(
self
)
return self._supported_boot_loaders
@InheritableProperty
def boot_loaders(self) -> List[str]:
"""
Represents the boot loaders which are able to boot this image.
:getter: The bootloaders. May be an empty list.
:setter: A list with the supported boot loaders for this image.
"""
if self._boot_loaders == enums.VALUE_INHERITED:
return self.supported_boot_loaders
# The following line is misleading for pyright since it doesn't understand
# that we only use a constant with str type.
return self._boot_loaders # type: ignore
@boot_loaders.setter # type: ignore[no-redef]
def boot_loaders(self, boot_loaders: Union[List[str], str]):
"""
Setter of the boot loaders.
:param boot_loaders: The boot loaders for the image.
:raises TypeError: In case this was of a not allowed type.
:raises ValueError: In case the str which contained the list could not be successfully split.
"""
# allow the magic inherit string to persist
if boot_loaders == enums.VALUE_INHERITED:
self._boot_loaders = enums.VALUE_INHERITED
return
if boot_loaders:
boot_loaders_split = input_converters.input_string_or_list(boot_loaders)
if not isinstance(boot_loaders_split, list):
raise TypeError("boot_loaders needs to be of type list!")
if not set(boot_loaders_split).issubset(self.supported_boot_loaders):
raise ValueError(
f"Error with image {self.name} - not all boot_loaders {boot_loaders_split} are"
f" supported {self.supported_boot_loaders}"
)
self._boot_loaders = boot_loaders_split
else:
self._boot_loaders = []
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/items/distro.py (distro.py)
# ---------------------------------------------------------------------------
"""
Cobbler module that contains the code for a Cobbler distro object.
Changelog:
Schema: From -> To
V3.4.0 (unreleased):
* Added:
* ``find_distro_path()``
* ``link_distro()``
* Changed:
* Constructor: ``kwargs`` can now be used to seed the item during creation.
* ``children``: The property was moved to the base class.
* ``from_dict()``: The method was moved to the base class.
V3.3.4 (unreleased):
* No changes
V3.3.3:
* Changed:
* ``redhat_management_key``: Inherits from the settings again
V3.3.2:
* No changes
V3.3.1:
* No changes
V3.3.0:
* This release switched from pure attributes to properties (getters/setters).
* Added:
* ``from_dict()``
* Moved to base class (Item):
* ``ctime``: float
* ``depth``: int
* ``mtime``: float
* ``uid``: str
* ``kernel_options``: dict
* ``kernel_options_post``: dict
* ``autoinstall_meta``: dict
* ``boot_files``: list/dict
* ``template_files``: list/dict
* ``comment``: str
* ``name``: str
* ``owners``: list[str]
* Changed:
* ``tree_build_time``: str -> float
* ``arch``: str -> Union[list, str]
* ``fetchable_files``: list/dict? -> dict
* ``boot_loader`` -> boot_loaders (rename)
* Removed:
* ``get_fields()``
* ``get_parent``
* ``set_kernel()`` - Please use the property ``kernel``
* ``set_remote_boot_kernel()`` - Please use the property ``remote_boot_kernel``
* ``set_tree_build_time()`` - Please use the property ``tree_build_time``
* ``set_breed()`` - Please use the property ``breed``
* ``set_os_version()`` - Please use the property ``os_version``
* ``set_initrd()`` - Please use the property ``initrd``
* ``set_remote_boot_initrd()`` - Please use the property ``remote_boot_initrd``
* ``set_source_repos()`` - Please use the property ``source_repos``
* ``set_arch()`` - Please use the property ``arch``
* ``get_arch()`` - Please use the property ``arch``
* ``set_supported_boot_loaders()`` - Please use the property ``supported_boot_loaders``. It is readonly.
* ``set_boot_loader()`` - Please use the property ``boot_loader``
* ``set_redhat_management_key()`` - Please use the property ``redhat_management_key``
* ``get_redhat_management_key()`` - Please use the property ``redhat_management_key``
V3.2.2:
* No changes
V3.2.1:
* Added:
* ``kickstart``: Resolves as a proxy to ``autoinstall``
V3.2.0:
* No changes
V3.1.2:
* Added:
* ``remote_boot_kernel``: str
* ``remote_grub_kernel``: str
* ``remote_boot_initrd``: str
* ``remote_grub_initrd``: str
V3.1.1:
* No changes
V3.1.0:
* Added:
* ``get_arch()``
V3.0.1:
* File was moved from ``cobbler/item_distro.py`` to ``cobbler/items/distro.py``.
V3.0.0:
* Added:
* ``boot_loader``: Union[str, inherit]
* Changed:
* rename: ``ks_meta`` -> ``autoinstall_meta``
* ``redhat_management_key``: Union[str, inherit] -> str
* Removed:
* ``redhat_management_server``: Union[str, inherit]
V2.8.5:
* Initial tracking of changes for the changelog.
* Added:
* ``name``: str
* ``ctime``: float
* ``mtime``: float
* ``uid``: str
* ``owners``: Union[list, SETTINGS:default_ownership]
* ``kernel``: str
* ``initrd``: str
* ``kernel_options``: dict
* ``kernel_options_post``: dict
* ``ks_meta``: dict
* ``arch``: str
* ``breed``: str
* ``os_version``: str
* ``source_repos``: list
* ``depth``: int
* ``comment``: str
* ``tree_build_time``: str
* ``mgmt_classes``: list
* ``boot_files``: list/dict?
* ``fetchable_files``: list/dict?
* ``template_files``: list/dict?
* ``redhat_management_key``: Union[str, inherit]
* ``redhat_management_server``: Union[str, inherit]
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import copy
import glob
import os
from typing import TYPE_CHECKING, Any, Dict, List, Union
from cobbler import enums, grub, utils, validate
from cobbler.cexceptions import CX
from cobbler.decorator import InheritableProperty, LazyProperty
from cobbler.items.abstract.bootable_item import BootableItem
from cobbler.utils import input_converters, signatures
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Distro(BootableItem):
"""
A Cobbler distribution object
"""
# Constants
TYPE_NAME = "distro"
COLLECTION_TYPE = "distro"
def __init__(self, api: "CobblerAPI", *args: Any, **kwargs: Any):
"""
This creates a Distro object.
:param api: The Cobbler API object which is used for resolving information.
"""
super().__init__(api)
# Prevent attempts to clear the to_dict cache before the object is initialized.
self._has_initialized = False
self._tree_build_time = 0.0
self._arch = enums.Archs.X86_64
self._boot_loaders: Union[List[str], str] = enums.VALUE_INHERITED
self._breed = ""
self._initrd = ""
self._kernel = ""
self._mgmt_classes = []
self._os_version = ""
self._redhat_management_key = enums.VALUE_INHERITED
self._source_repos = []
self._remote_boot_kernel = ""
self._remote_grub_kernel = ""
self._remote_boot_initrd = ""
self._remote_grub_initrd = ""
self._supported_boot_loaders: List[str] = []
if len(kwargs) > 0:
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
def __getattr__(self, name: str) -> Any:
if name == "ks_meta":
return self.autoinstall_meta
raise AttributeError(f'Attribute "{name}" did not exist on object type Distro.')
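The ``__getattr__`` hook above forwards the legacy ``ks_meta`` name to ``autoinstall_meta``. A standalone sketch of this aliasing pattern (the class name and attribute values here are illustrative, not Cobbler's):

```python
class LegacyAliasDemo:
    """Maps a removed attribute name onto its successor, as Distro does for ks_meta."""

    def __init__(self):
        self.autoinstall_meta = {"tree": "http://example.invalid/distro"}

    def __getattr__(self, name):
        # __getattr__ only runs when normal attribute lookup fails,
        # so existing attributes are never shadowed by the alias.
        if name == "ks_meta":
            return self.autoinstall_meta
        raise AttributeError(f'Attribute "{name}" did not exist on object type LegacyAliasDemo.')


demo = LegacyAliasDemo()
assert demo.ks_meta is demo.autoinstall_meta
```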
#
# override some base class methods first (BootableItem)
#
def make_clone(self):
"""
Clone a distro object.
:return: The cloned object. Not persisted on the disk or in a database.
"""
# FIXME: Change unique base attributes
_dict = copy.deepcopy(self.to_dict())
# Drop attributes which are computed from other attributes
computed_properties = ["remote_grub_initrd", "remote_grub_kernel", "uid"]
for property_name in computed_properties:
_dict.pop(property_name, None)
return Distro(self.api, **_dict)
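``make_clone`` above round-trips the item through its serialized form and drops the computed attributes so the clone receives fresh values. The same copy-then-prune pattern in isolation (field names are illustrative):

```python
import copy


def clone_record(record, computed=("uid",)):
    """Deep-copy a serialized record, dropping attributes that are derived elsewhere."""
    data = copy.deepcopy(record)
    for key in computed:
        data.pop(key, None)  # tolerate records that never had the key
    return data


original = {"name": "fedora-39", "uid": "abc123", "kernel": "/boot/vmlinuz"}
clone = clone_record(original)
assert "uid" not in clone
assert clone["kernel"] == original["kernel"]
```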
@classmethod
def _remove_depreacted_dict_keys(cls, dictionary: Dict[Any, Any]):
r"""
See :meth:`~cobbler.items.item.BootableItem._remove_deprecated_dict_keys`.
:param dictionary: The dict to update
"""
if "parent" in dictionary:
dictionary.pop("parent")
super()._remove_depreacted_dict_keys(dictionary)
def check_if_valid(self):
"""
Check if a distro object is valid. If invalid an exception is raised.
"""
super().check_if_valid()
if not self.inmemory:
return
if self.kernel == "" and self.remote_boot_kernel == "":
raise CX(
f"Error with distro {self.name} - either kernel or remote-boot-kernel is required"
)
#
# specific methods for item.Distro
#
@property
def parent(self):
"""
Distros don't have parent objects.
"""
return None
@parent.setter
def parent(self, parent: str):
"""
Setter for the parent property.
:param value: Is ignored.
"""
self.logger.warning(
"Setting the parent of a distribution is not supported. Ignoring action!"
)
@LazyProperty
def kernel(self) -> str:
"""
Specifies a kernel. The kernel parameter is a full path, a filename in the configured kernel directory or a
directory path that would contain a selectable kernel. Kernel naming conventions are checked, see docs in the
utils module for ``find_kernel``.
:getter: The last successfully validated kernel path.
:setter: May raise a ``ValueError`` or ``TypeError`` in case of validation errors.
"""
return self._kernel
@kernel.setter
def kernel(self, kernel: str):
"""
Setter for the ``kernel`` property.
:param kernel: The path to the kernel.
:raises TypeError: If kernel was not of type str.
:raises ValueError: If the kernel was not found.
"""
if not isinstance(kernel, str): # type: ignore
raise TypeError("kernel was not of type str")
if kernel:
if not utils.find_kernel(kernel):
raise ValueError(
"kernel not found or it does not match with allowed kernel filename pattern"
f"[{utils.re_kernel.pattern}]: {kernel}."
)
self._kernel = kernel
@LazyProperty
def remote_boot_kernel(self) -> str:
"""
URL to a remote kernel. If the bootloader supports this feature, it directly tries to retrieve the kernel and
boot it. (grub supports tftp and http protocol and server must be an IP).
:getter: Returns the current remote URL to boot from.
:setter: Raises a ``TypeError`` or ``ValueError`` in case the provided value was not correct.
"""
# TODO: Obsolete it and merge with kernel property
return self._remote_boot_kernel
@remote_boot_kernel.setter
def remote_boot_kernel(self, remote_boot_kernel: str):
"""
Setter for the ``remote_boot_kernel`` property.
:param remote_boot_kernel: The new URL to the remote booted kernel.
:raises TypeError: Raised in case the URL is not of type str.
:raises ValueError: Raised in case the validation is not succeeding.
"""
if not isinstance(remote_boot_kernel, str): # type: ignore
raise TypeError(
"Field remote_boot_kernel of distro needs to be of type str!"
)
if not remote_boot_kernel:
self._remote_grub_kernel = remote_boot_kernel
self._remote_boot_kernel = remote_boot_kernel
return
if not validate.validate_boot_remote_file(remote_boot_kernel):
raise ValueError(
"remote_boot_kernel needs to be a valid URL starting with tftp or http!"
)
parsed_url = grub.parse_grub_remote_file(remote_boot_kernel)
if parsed_url is None:
raise ValueError(
f"Invalid URL for remote boot kernel: {remote_boot_kernel}"
)
self._remote_grub_kernel = parsed_url
self._remote_boot_kernel = remote_boot_kernel
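The setter accepts only tftp/http URLs, as its docstring states. A simplified stand-in for that scheme check (the real ``validate.validate_boot_remote_file`` is stricter, e.g. the server part must be an IP address):

```python
from urllib.parse import urlparse


def looks_like_remote_boot_file(url):
    """Rough check mirroring the documented contract: tftp:// or http:// with a host."""
    parsed = urlparse(url)
    return parsed.scheme in ("tftp", "http") and bool(parsed.netloc)


assert looks_like_remote_boot_file("http://192.168.1.1/vmlinuz")
assert not looks_like_remote_boot_file("ftp://192.168.1.1/vmlinuz")
```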
@LazyProperty
def tree_build_time(self) -> float:
"""
Represents the import time of the distro. If not imported, this field is not meaningful.
:getter: The timestamp of the distro import.
:setter: The new timestamp. Integers are converted to float; other non-float values raise a ``TypeError``.
"""
return self._tree_build_time
@tree_build_time.setter
def tree_build_time(self, datestamp: float):
r"""
Setter for the ``tree_build_time`` property.
:param datestamp: The datestamp to save the builddate. There is an attempt to convert it to a float, so please
make sure it is compatible to this.
:raises TypeError: In case the value was not of type ``float``.
"""
if isinstance(datestamp, int):
datestamp = float(datestamp)
if not isinstance(datestamp, float): # type: ignore
raise TypeError("datestamp needs to be of type float")
self._tree_build_time = datestamp
@LazyProperty
def breed(self) -> str:
"""
The repository system breed. This decides some defaults for most actions with a repo in Cobbler.
:getter: The breed detected.
:setter: May raise a ``ValueError`` or ``TypeError`` in case the given value is wrong.
"""
return self._breed
@breed.setter
def breed(self, breed: str):
"""
Set the Operating system breed.
:param breed: The new breed to set.
"""
self._breed = validate.validate_breed(breed)
@LazyProperty
def os_version(self) -> str:
r"""
The operating system version which the image contains.
:getter: The sanitized operating system version.
:setter: Accepts a str which will be validated against the ``distro_signatures.json``.
"""
return self._os_version
@os_version.setter
def os_version(self, os_version: str):
"""
Set the Operating System Version.
:param os_version: The new OS Version.
"""
self._os_version = validate.validate_os_version(os_version, self.breed)
@LazyProperty
def initrd(self) -> str:
r"""
Specifies an initrd image. Path search works as for the ``kernel`` property. File must be named appropriately.
:getter: The current path to the initrd.
:setter: May raise a ``TypeError`` or ``ValueError`` in case the validation is not successful.
"""
return self._initrd
@initrd.setter
def initrd(self, initrd: str):
r"""
Setter for the ``initrd`` property.
:param initrd: The new path to the ``initrd``.
:raises TypeError: In case the value was not of type ``str``.
:raises ValueError: In case the new value was not found or specified.
"""
if not isinstance(initrd, str): # type: ignore
raise TypeError("initrd must be of type str")
if initrd:
if not utils.find_initrd(initrd):
raise ValueError(f"initrd not found: {initrd}")
self._initrd = initrd
@LazyProperty
def remote_grub_kernel(self) -> str:
r"""
This is tied to the ``remote_boot_kernel`` property. It contains the URL of that field in a format which grub
can use directly.
:getter: The computed URL from ``remote_boot_kernel``.
"""
return self._remote_grub_kernel
@LazyProperty
def remote_grub_initrd(self) -> str:
r"""
This is tied to the ``remote_boot_initrd`` property. It contains the URL of that field in a format which grub
can use directly.
:getter: The computed URL from ``remote_boot_initrd``.
"""
return self._remote_grub_initrd
@LazyProperty
def remote_boot_initrd(self) -> str:
r"""
URL to a remote initrd. If the bootloader supports this feature, it directly tries to retrieve the initrd and
boot it. (grub supports tftp and http protocol and server must be an IP).
:getter: Returns the current remote URL to boot from.
:setter: Raises a ``TypeError`` or ``ValueError`` in case the provided value was not correct.
"""
return self._remote_boot_initrd
@remote_boot_initrd.setter
def remote_boot_initrd(self, remote_boot_initrd: str):
r"""
The setter for the ``remote_boot_initrd`` property.
:param remote_boot_initrd: The new value for the property.
:raises TypeError: In case the value was not of type ``str``.
:raises ValueError: In case the new value could not be validated successfully.
"""
if not isinstance(remote_boot_initrd, str): # type: ignore
raise TypeError("remote_boot_initrd must be of type str!")
if not remote_boot_initrd:
self._remote_boot_initrd = remote_boot_initrd
self._remote_grub_initrd = remote_boot_initrd
return
if not validate.validate_boot_remote_file(remote_boot_initrd):
raise ValueError(
"remote_boot_initrd needs to be a valid URL starting with tftp or http!"
)
parsed_url = grub.parse_grub_remote_file(remote_boot_initrd)
if parsed_url is None:
raise ValueError(
f"Invalid URL for remote boot initrd: {remote_boot_initrd}"
)
self._remote_grub_initrd = parsed_url
self._remote_boot_initrd = remote_boot_initrd
@LazyProperty
def source_repos(self) -> List[str]:
"""
A list of http:// URLs on the Cobbler server that point to yum configuration files that can be used to
install core packages. Used by ``cobbler import`` only.
:getter: The source repos used.
:setter: The new list of source repos to use.
"""
return self._source_repos
@source_repos.setter
def source_repos(self, repos: List[str]):
r"""
Setter for the ``source_repos`` property.
:param repos: The list of URLs.
:raises TypeError: In case the value was not of type ``str``.
"""
if not isinstance(repos, list): # type: ignore
raise TypeError(
"Field source_repos in object distro needs to be of type list."
)
self._source_repos = repos
@LazyProperty
def arch(self):
"""
The field is mainly relevant to PXE provisioning.
Using an alternative distro type allows for dhcpd.conf templating to "do the right thing" with those
systems -- this also relates to bootloader configuration files which have different syntax for different
distro types (because of the bootloaders).
This field is named "arch" because mainly on Linux, we only care about the architecture, though if (in the
future) new provisioning types are added, an arch value might be something like "bsd_x86".
:return: Return the current architecture.
"""
return self._arch
@arch.setter
def arch(self, arch: Union[str, enums.Archs]):
"""
The setter for the arch property.
:param arch: The architecture of the operating system distro.
"""
self._arch = enums.Archs.to_enum(arch)
@property
def supported_boot_loaders(self) -> List[str]:
"""
Some distributions, particularly on powerpc, can only be netbooted using specific bootloaders.
:return: The bootloaders which are available for being set.
"""
if len(self._supported_boot_loaders) == 0:
self._supported_boot_loaders = signatures.get_supported_distro_boot_loaders(
self
)
return self._supported_boot_loaders
@InheritableProperty
def boot_loaders(self) -> List[str]:
"""
All boot loaders for which Cobbler generates entries.
.. note:: This property can be set to ``<<inherit>>``.
:getter: The bootloaders.
:setter: Validates this against the list of well-known bootloaders and raises a ``TypeError`` or ``ValueError``
in case the validation goes south.
"""
if self._boot_loaders == enums.VALUE_INHERITED:
return self.supported_boot_loaders
# The following line is misleading for pyright since it doesn't understand
# that we only use a constant with str type.
return self._boot_loaders # type: ignore
@boot_loaders.setter # type: ignore[no-redef]
def boot_loaders(self, boot_loaders: Union[str, List[str]]):
"""
Set the bootloader for the distro.
:param boot_loaders: The list with names of the bootloaders. Must be one of the supported ones.
:raises TypeError: In case the value could not be converted to a list or was not of type list.
:raises ValueError: In case the boot loader is not in the list of valid boot loaders.
"""
if isinstance(boot_loaders, str):
# allow the magic inherit string to persist, otherwise split the string.
if boot_loaders == enums.VALUE_INHERITED:
self._boot_loaders = enums.VALUE_INHERITED
return
boot_loaders = input_converters.input_string_or_list(boot_loaders)
if not isinstance(boot_loaders, list): # type: ignore
raise TypeError("boot_loaders needs to be of type list!")
if not set(boot_loaders).issubset(self.supported_boot_loaders):
raise ValueError(
f"Invalid boot loader names: {boot_loaders}. Supported boot loaders are:"
f" {' '.join(self.supported_boot_loaders)}"
)
self._boot_loaders = boot_loaders
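The setter splits a string into tokens and validates membership against the supported loaders. The split-and-subset check in isolation (the token splitting approximates ``input_converters.input_string_or_list``; the supported list is illustrative):

```python
SUPPORTED = ["grub", "pxe", "ipxe"]


def validate_boot_loaders(value, supported=SUPPORTED):
    """Split a comma/space separated string and check every name is supported."""
    if isinstance(value, str):
        value = [tok for tok in value.replace(",", " ").split() if tok]
    if not isinstance(value, list):
        raise TypeError("boot_loaders needs to be of type list!")
    if not set(value).issubset(supported):
        raise ValueError(f"Invalid boot loader names: {value}")
    return value


assert validate_boot_loaders("grub ipxe") == ["grub", "ipxe"]
```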
@InheritableProperty
def redhat_management_key(self) -> str:
r"""
Get the redhat management key. This is probably only needed if you have spacewalk, uyuni or SUSE Manager
running.
.. note:: This property can be set to ``<<inherit>>``.
:return: The key as a string.
"""
return self._resolve("redhat_management_key")
@redhat_management_key.setter # type: ignore[no-redef]
def redhat_management_key(self, management_key: str):
"""
Set the redhat management key. This is probably only needed if you have spacewalk, uyuni or SUSE Manager
running.
:param management_key: The redhat management key.
"""
if not isinstance(management_key, str): # type: ignore
raise TypeError(
"Field redhat_management_key of object distro needs to be of type str!"
)
self._redhat_management_key = management_key
def find_distro_path(self):
r"""
This returns the absolute path to the distro under the ``distro_mirror`` directory. If that directory doesn't
contain the kernel, the directory of the kernel in the distro is returned.
:return: The path to the distribution files.
"""
possible_dirs = glob.glob(self.api.settings().webdir + "/distro_mirror/*")
for directory in possible_dirs:
if os.path.dirname(self.kernel).find(directory) != -1:
return os.path.join(
self.api.settings().webdir, "distro_mirror", directory
)
# non-standard directory, assume it's the same as the directory in which the given distro's kernel is
return os.path.dirname(self.kernel)
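``find_distro_path`` matches the kernel's directory against each mirror directory by substring search, falling back to the kernel's own directory. The matching logic as a pure function (paths are illustrative):

```python
import os


def find_distro_path(kernel_path, mirror_dirs):
    """Return the mirror directory containing the kernel, else the kernel's directory."""
    kernel_dir = os.path.dirname(kernel_path)
    for directory in mirror_dirs:
        if kernel_dir.find(directory) != -1:
            return directory
    # non-standard directory: fall back to the kernel's own location
    return kernel_dir


mirrors = ["/srv/www/distro_mirror/f39"]
assert find_distro_path("/srv/www/distro_mirror/f39/images/vmlinuz", mirrors) == mirrors[0]
assert find_distro_path("/boot/vmlinuz", mirrors) == "/boot"
```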
def link_distro(self):
"""
Link a Cobbler distro from its source into the web directory to make it reachable from the outside.
"""
# find the tree location
base = self.find_distro_path()
if not base:
return
dest_link = os.path.join(self.api.settings().webdir, "links", self.name)
# create the links directory only if we are mirroring because with SELinux Apache can't symlink to NFS (without
# some doing)
if not os.path.lexists(dest_link):
try:
os.symlink(base, dest_link)
except Exception:
# FIXME: This shouldn't happen but I've (jsabo) seen it...
self.logger.warning(
"- symlink creation failed: %s, %s", base, dest_link
)
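``link_distro`` uses ``os.path.lexists`` so that an existing (even dangling) symlink is never overwritten. A minimal runnable sketch of that guard:

```python
import os
import tempfile


def link_tree(base, links_dir, name):
    """Create links_dir/name -> base unless a link (even a broken one) already exists."""
    dest = os.path.join(links_dir, name)
    if not os.path.lexists(dest):
        os.symlink(base, dest)
    return dest


with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "distro")
    links = os.path.join(tmp, "links")
    os.mkdir(src)
    os.mkdir(links)
    dest = link_tree(src, links, "fedora-39")
    assert os.path.islink(dest)
    assert os.readlink(dest) == src
```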
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/items/abstract/inheritable_item.py (inheritable_item.py)
# ---------------------------------------------------------------------------
"""
"InheritableItem" the entry point for items that have logical parents and children.
Changelog:
* V3.4.0 (unreleased):
* Initial creation of the class
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Enno Gotthold <enno.gotthold@suse.com>
from abc import ABC
from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Union
from cobbler.cexceptions import CX
from cobbler.decorator import LazyProperty
from cobbler.items.abstract.base_item import BaseItem
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.menu import Menu
from cobbler.items.profile import Profile
from cobbler.items.system import System
from cobbler.settings import Settings
class HierarchyItem(NamedTuple):
"""
NamedTuple to display the dependency that a single item has.
The `dependant_item_type` object is stored in the `dependant_type_attribute` attribute of the Item.
For example, an item with HierarchyItem("profile", "repos") contains `Profile` objects in the `repos` attribute.
"""
dependant_item_type: str
dependant_type_attribute: str
class LogicalHierarchy(NamedTuple):
"""
NamedTuple to display the order of hierarchy in the dependency tree.
"""
up: List[HierarchyItem]
down: List[HierarchyItem]
class InheritableItem(BaseItem, ABC):
"""
Abstract class that acts as a starting point in the inheritance for items that have a parent and children.
"""
# Constants
TYPE_NAME = "inheritable_abstract"
COLLECTION_TYPE = "inheritable_abstract"
# Item types dependencies.
# Used to determine descendants and cache invalidation.
# Format: {"Item Type": [("Dependent Item Type", "Dependent Type attribute"), ..], [..]}
TYPE_DEPENDENCIES: Dict[str, List[HierarchyItem]] = {
"repo": [
HierarchyItem("profile", "repos"),
],
"distro": [
HierarchyItem("profile", "distro"),
],
"menu": [
HierarchyItem("menu", "parent"),
HierarchyItem("image", "menu"),
HierarchyItem("profile", "menu"),
],
"profile": [
HierarchyItem("profile", "parent"),
HierarchyItem("system", "profile"),
],
"image": [
HierarchyItem("system", "image"),
],
"system": [],
}
# Defines a logical hierarchy of Item Types.
# Format: {"Item Type": [("Previous level Type", "Attribute to go to the previous level",), ..],
# [("Next level Item Type", "Attribute to move from the next level"), ..]}
LOGICAL_INHERITANCE: Dict[str, LogicalHierarchy] = {
"distro": LogicalHierarchy(
[],
[
HierarchyItem("profile", "distro"),
],
),
"profile": LogicalHierarchy(
[
HierarchyItem("distro", "distro"),
],
[
HierarchyItem("system", "profile"),
],
),
"image": LogicalHierarchy(
[],
[
HierarchyItem("system", "image"),
],
),
"system": LogicalHierarchy(
[HierarchyItem("image", "image"), HierarchyItem("profile", "profile")],
[],
),
}
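The ``TYPE_DEPENDENCIES`` table above can be walked transitively to find every item type whose cache must be invalidated when an item of a given type changes. A sketch using plain tuples instead of the ``HierarchyItem`` NamedTuple:

```python
TYPE_DEPENDENCIES = {
    "repo": [("profile", "repos")],
    "distro": [("profile", "distro")],
    "menu": [("menu", "parent"), ("image", "menu"), ("profile", "menu")],
    "profile": [("profile", "parent"), ("system", "profile")],
    "image": [("system", "image")],
    "system": [],
}


def dependent_types(item_type):
    """Collect every item type that may (transitively) reference item_type."""
    seen, stack = set(), [item_type]
    while stack:
        for dep_type, _attr in TYPE_DEPENDENCIES[stack.pop()]:
            if dep_type not in seen:
                seen.add(dep_type)
                stack.append(dep_type)
    return seen


assert dependent_types("distro") == {"profile", "system"}
assert dependent_types("system") == set()
```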
def __init__(
self, api: "CobblerAPI", *args: Any, is_subobject: bool = False, **kwargs: Any
):
"""
Constructor. Requires a back reference to the CobblerAPI object.
NOTE: is_subobject is used for objects that allow inheritance in their trees. This inheritance refers to
conceptual inheritance, not Python inheritance. Objects created with is_subobject need to call their
setter for parent immediately after creation and pass in a value of an object of the same type. Currently this
is only supported for profiles. Sub objects blend their data with their parent objects and only require a valid
parent name and a name for themselves, so other required options can be gathered from items further up the
Cobbler tree.
distro
    profile
        profile  <-- created with is_subobject=True
            system   <-- created as normal
image
    system
menu
    menu
For consistency, there is some code supporting this in all object types, though it is only usable
(and only should be used) for profiles at this time. Objects that are children of
objects of the same type (i.e. subprofiles) need to pass this in as True. Otherwise, just
use False for is_subobject and the parent object will (therefore) have a different type.
The keyword arguments are used to seed the object. This is the preferred way over ``from_dict`` starting with
Cobbler version 3.4.0.
:param api: The Cobbler API object which is used for resolving information.
:param is_subobject: See above extensive description.
"""
super().__init__(api, *args, **kwargs)
self._depth = 0
self._parent = ""
self._is_subobject = is_subobject
self._children: List[str] = []
self._inmemory = True
if len(kwargs) > 0:
kwargs.update({"is_subobject": is_subobject})
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
@LazyProperty
def depth(self) -> int:
"""
This represents the logical depth of an object in the category of the same items. Important for the order of
loading items from the disk and other related features where the alphabetical order is incorrect for sorting.
:getter: The logical depth of the object.
:setter: The new int for the logical object-depth.
"""
return self._depth
@depth.setter
def depth(self, depth: int) -> None:
"""
Setter for depth.
:param depth: The new value for depth.
"""
if not isinstance(depth, int): # type: ignore
raise TypeError("depth needs to be of type int")
self._depth = depth
@LazyProperty
def parent(self) -> Optional[Union["System", "Profile", "Distro", "Menu"]]:
"""
This property resolves the parent object of an item. In case there is no parent, this returns
``None``.
:getter: Returns the parent object or None if it can't be resolved via the Cobbler API.
:setter: The name of the new logical parent.
"""
if self._parent == "":
return None
return self.api.get_items(self.COLLECTION_TYPE).get(self._parent) # type: ignore
@parent.setter
def parent(self, parent: str) -> None:
"""
Set the parent object for this object.
:param parent: The new parent object. This needs to be a descendant in the logical inheritance chain.
"""
if not isinstance(parent, str): # type: ignore
raise TypeError('Property "parent" must be of type str!')
if not parent:
self._parent = ""
return
if parent == self.name:
# check must be done in two places as setting parent could be called before/after setting name...
raise CX("self parentage is forbidden")
found = self.api.get_items(self.COLLECTION_TYPE).get(parent)
if found is None:
raise CX(f'parent item "{parent}" not found, inheritance not possible')
self._parent = parent
self.depth = found.depth + 1 # type: ignore
@LazyProperty
def get_parent(self) -> str:
"""
        This method returns the name of the parent for the object. In case there is no parent, this returns an
        empty string.
"""
return self._parent
def get_conceptual_parent(self) -> Optional["InheritableItem"]:
"""
The parent may just be a superclass for something like a sub-profile. Get the first parent of a different type.
:return: The first item which is conceptually not from the same type.
"""
if self is None: # type: ignore
return None
curr_obj = self
next_obj = curr_obj.parent
while next_obj is not None:
curr_obj = next_obj
next_obj = next_obj.parent
if curr_obj.TYPE_NAME in curr_obj.LOGICAL_INHERITANCE:
for prev_level in curr_obj.LOGICAL_INHERITANCE[curr_obj.TYPE_NAME].up:
prev_level_type = prev_level.dependant_item_type
prev_level_name = getattr(
curr_obj, "_" + prev_level.dependant_type_attribute
)
if prev_level_name is not None and prev_level_name != "":
prev_level_item = self.api.find_items(
prev_level_type, name=prev_level_name, return_list=False
)
if prev_level_item is not None and not isinstance(
prev_level_item, list
):
return prev_level_item
return None
@property
def logical_parent(self) -> Any:
"""
        This property contains the logical parent of an object. In case there is no parent, this returns
        None.
:getter: Returns the parent object or None if it can't be resolved via the Cobbler API.
:setter: The name of the new logical parent.
"""
parent = self.parent
if parent is None:
return self.get_conceptual_parent()
return parent
@property
def children(self) -> List["InheritableItem"]:
"""
The list of logical children of any depth.
:getter: An empty list in case of items which don't have logical children.
:setter: Replace the list of children completely with the new provided one.
"""
results: List[InheritableItem] = []
list_items = self.api.get_items(self.COLLECTION_TYPE)
for obj in list_items:
if obj.get_parent == self._name: # type: ignore
results.append(obj) # type: ignore
return results
def tree_walk(self) -> List["InheritableItem"]:
"""
Get all children related by parent/child relationship.
:return: The list of children objects.
"""
results: List["InheritableItem"] = []
for child in self.children:
results.append(child)
results.extend(child.tree_walk())
return results
@property
def descendants(self) -> List["InheritableItem"]:
"""
Get objects that depend on this object, i.e. those that would be affected by a cascading delete, etc.
.. note:: This is a read only property.
:getter: This is a list of all descendants. May be empty if none exist.
"""
childs = self.tree_walk()
results = set(childs)
childs.append(self) # type: ignore
for child in childs:
for item_type in self.TYPE_DEPENDENCIES[child.COLLECTION_TYPE]:
dep_type_items = self.api.find_items(
item_type.dependant_item_type,
{item_type.dependant_type_attribute: child.name},
return_list=True,
)
if dep_type_items is None or not isinstance(dep_type_items, list):
raise ValueError("Expected list to be returned by find_items")
results.update(dep_type_items)
for dep_item in dep_type_items:
results.update(dep_item.descendants)
return list(results)
@LazyProperty
def is_subobject(self) -> bool:
"""
        Whether the object is a subobject of another object or not.
:getter: True in case the object is a subobject, False otherwise.
:setter: Sets the value. If this is not a bool, this will raise a ``TypeError``.
"""
return self._is_subobject
@is_subobject.setter
def is_subobject(self, value: bool) -> None:
"""
Setter for the property ``is_subobject``.
:param value: The boolean value whether this is a subobject or not.
:raises TypeError: In case the value was not of type bool.
"""
if not isinstance(value, bool): # type: ignore
raise TypeError(
"Field is_subobject of object item needs to be of type bool!"
)
self._is_subobject = value
def grab_tree(self) -> List[Union["InheritableItem", "Settings"]]:
"""
Climb the tree and get every node.
:return: The list of items with all parents from that object upwards the tree. Contains at least the item
itself and the settings of Cobbler.
"""
results: List[Union["InheritableItem", "Settings"]] = [self]
parent = self.logical_parent
while parent is not None:
results.append(parent)
parent = parent.logical_parent
# FIXME: Now get the object and check its existence
results.append(self.api.settings())
self.logger.debug(
"grab_tree found %s children (including settings) of this object",
len(results),
)
return results
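The upward walk that ``grab_tree()`` performs can be sketched in isolation. The following is a minimal stand-in, assuming a hypothetical ``Node`` class (not part of Cobbler) whose ``logical_parent`` plays the role of the resolved parent chain:

```python
# Minimal sketch of the parent-chain climb done by grab_tree() above, using a
# hypothetical stand-in Node class instead of real Cobbler items.
from typing import List, Optional


class Node:
    def __init__(self, name: str, parent: Optional["Node"] = None):
        self.name = name
        self.logical_parent = parent

    def grab_tree(self) -> List["Node"]:
        # Climb from this node to the topmost parent, collecting every node.
        results: List["Node"] = [self]
        parent = self.logical_parent
        while parent is not None:
            results.append(parent)
            parent = parent.logical_parent
        return results


root = Node("distro")
profile = Node("profile", root)
system = Node("system", profile)
print([node.name for node in system.grab_tree()])  # ['system', 'profile', 'distro']
```

In the real implementation the settings object is appended at the end of this list, so every item can fall back to global defaults.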
# [source: cobbler/cobbler (GPL-2.0), extracted 9/5/2024, 5:11:26 PM (Europe/Amsterdam)]

# ============================================================================
# cobbler_cobbler/cobbler/items/abstract/item_cache.py
# ============================================================================
"""
Module that contains the logic for Cobbler to cache an item.
The cache significantly speeds up Cobbler. This effect is achieved thanks to the reduced amount of lookups that are
required to be done.
"""
from typing import TYPE_CHECKING, Any, Dict, Optional
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class ItemCache:
"""
A Cobbler ItemCache object.
"""
def __init__(self, api: "CobblerAPI"):
"""
Constructor
Generalized parameterized cache format:
cache_key cache_value
{(P1, P2, .., Pn): value}
where P1, .., Pn are cache parameters
Parameterized cache for to_dict(resolved: bool).
The values of the resolved parameter are the key for the Dict.
In the to_dict case, there is only one cache parameter and only two key values:
{True: cache_value or None,
False: cache_value or None}
"""
self._cached_dict: Dict[bool, Optional[Dict[str, Any]]] = {
True: None,
False: None,
}
self.api = api
self.settings = api.settings()
def get_dict_cache(self, resolved: bool) -> Optional[Dict[str, Any]]:
"""
        Getting the dict cache.
:param resolved: "resolved" parameter for BaseItem.to_dict().
:return: The cache value for the object, or None if not set.
"""
if self.settings.cache_enabled:
return self._cached_dict[resolved]
return None
def set_dict_cache(self, value: Optional[Dict[str, Any]], resolved: bool):
"""
Setter for the dict cache.
:param value: Sets the value for the dict cache.
:param resolved: "resolved" parameter for BaseItem.to_dict().
"""
if self.settings.cache_enabled:
self._cached_dict[resolved] = value
def clean_dict_cache(self):
"""
        Cleaning the dict cache.
"""
if self.settings.cache_enabled:
self.set_dict_cache(None, True)
self.set_dict_cache(None, False)
def clean_cache(self):
"""
Cleaning the Item cache.
"""
if self.settings.cache_enabled:
self.clean_dict_cache()
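The parameterized cache format described in the constructor docstring can be demonstrated without a real ``CobblerAPI``. This is a minimal sketch, assuming a ``SimpleNamespace`` stand-in for ``api.settings()`` with only the ``cache_enabled`` flag:

```python
# Two-slot dict cache as used by ItemCache above: one slot per value of the
# "resolved" parameter of to_dict(). The settings object is a hypothetical
# stand-in for api.settings().
from types import SimpleNamespace

settings = SimpleNamespace(cache_enabled=True)

# cache_key -> cache_value, keyed by the boolean "resolved" parameter.
cache = {True: None, False: None}


def set_dict_cache(value, resolved):
    if settings.cache_enabled:
        cache[resolved] = value


def get_dict_cache(resolved):
    if settings.cache_enabled:
        return cache[resolved]
    return None


set_dict_cache({"name": "testmenu"}, resolved=True)
print(get_dict_cache(True))   # {'name': 'testmenu'}
print(get_dict_cache(False))  # None
```

Disabling ``cache_enabled`` turns every getter into a cache miss, which is why the real implementation can be toggled globally without touching the items.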
# ============================================================================
# cobbler_cobbler/cobbler/items/abstract/__init__.py
# ============================================================================
"""
Package to describe the abstract items that we define.
"""
# ============================================================================
# cobbler_cobbler/cobbler/items/abstract/base_item.py
# ============================================================================
"""
"BaseItem" is the highest point in the object hierarchy of Cobbler. All concrete objects that can be generated should
inherit from it or one of its derived classes.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import copy
import enum
import fnmatch
import logging
import re
import uuid
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from cobbler import enums
from cobbler.cexceptions import CX
from cobbler.decorator import InheritableProperty, LazyProperty
from cobbler.items.abstract.item_cache import ItemCache
from cobbler.utils import input_converters
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
RE_OBJECT_NAME = re.compile(r"[a-zA-Z0-9_\-.:]*$")
class BaseItem(ABC):
"""
Abstract base class to represent the common attributes that a concrete item needs to have at minimum.
"""
# Constants
TYPE_NAME = "base"
COLLECTION_TYPE = "base"
@staticmethod
def _is_dict_key(name: str) -> bool:
"""
Whether the attribute is part of the item's to_dict or not
        :param name: The attribute name.
"""
return (
name[:1] == "_"
and "__" not in name
and name
not in {
"_cache",
"_supported_boot_loaders",
"_has_initialized",
"_inmemory",
}
)
@classmethod
def __find_compare(
cls,
from_search: Union[str, List[Any], Dict[Any, Any], bool],
from_obj: Union[str, List[Any], Dict[Any, Any], bool],
) -> bool:
"""
        Compare a value from a search query against the corresponding value of an object.
        :param from_search: The value from the search query; parsed to match the type of ``from_obj``.
        :param from_obj: The value stored on the object.
:return: True if the comparison succeeded, False otherwise.
:raises TypeError: In case the type of one of the two variables is wrong or could not be converted
intelligently.
"""
del cls
if isinstance(from_obj, str):
# FIXME: fnmatch is only used for string to string comparisons which should cover most major usage, if
# not, this deserves fixing
from_obj_lower = from_obj.lower()
from_search_lower = from_search.lower() # type: ignore
# It's much faster to not use fnmatch if it's not needed
if (
"?" not in from_search_lower
and "*" not in from_search_lower
and "[" not in from_search_lower
):
match = from_obj_lower == from_search_lower # type: ignore
else:
match = fnmatch.fnmatch(from_obj_lower, from_search_lower) # type: ignore
return match # type: ignore
if isinstance(from_search, str):
if isinstance(from_obj, list):
from_search = input_converters.input_string_or_list(from_search)
for list_element in from_search:
if list_element not in from_obj:
return False
return True
if isinstance(from_obj, dict):
from_search = input_converters.input_string_or_dict(
from_search, allow_multiples=True
)
for dict_key in list(from_search.keys()): # type: ignore
dict_value = from_search[dict_key]
if dict_key not in from_obj:
return False
if not dict_value == from_obj[dict_key]:
return False
return True
if isinstance(from_obj, bool): # type: ignore
inp = from_search.lower() in ["true", "1", "y", "yes"]
if inp == from_obj:
return True
return False
raise TypeError(f"find cannot compare type: {type(from_obj)}")
def __init__(self, api: "CobblerAPI", *args: Any, **kwargs: Any):
# Prevent attempts to clear the to_dict cache before the object is initialized.
self._has_initialized = False
# Attributes
self._ctime = 0.0
self._mtime = 0.0
self._uid = uuid.uuid4().hex
self._name = ""
self._comment = ""
self._owners: Union[List[str], str] = enums.VALUE_INHERITED
self._inmemory = (
False # Set this to true after the last attribute has been initialized.
)
# Item Cache
self._cache: ItemCache = ItemCache(api)
# Global/Internal API
self.api = api
# Logger
self.logger = logging.getLogger()
# Bootstrap rest of the properties
if len(kwargs) > 0:
self.from_dict(kwargs)
if self._uid == "":
self._uid = uuid.uuid4().hex
def __eq__(self, other: Any) -> bool:
"""
Comparison based on the uid for our items.
:param other: The other Item to compare.
:return: True if uid is equal, otherwise false.
"""
if isinstance(other, BaseItem):
return self._uid == other.uid
return False
def __hash__(self):
"""
Hash table for Items.
Requires special handling if the uid value changes and the Item
is present in set, frozenset, and dict types.
:return: hash(uid).
"""
return hash(self._uid)
@property
def uid(self) -> str:
"""
The uid is the internal unique representation of a Cobbler object. It should never be used twice, even after an
object was deleted.
:getter: The uid for the item. Should be unique across a running Cobbler instance.
:setter: The new uid for the object. Should only be used by the Cobbler Item Factory.
"""
return self._uid
@uid.setter
def uid(self, uid: str) -> None:
"""
Setter for the uid of the item.
:param uid: The new uid.
"""
if self._uid != uid and self.COLLECTION_TYPE != BaseItem.COLLECTION_TYPE:
collection = self.api.get_items(self.COLLECTION_TYPE)
with collection.lock:
item = collection.get(self.name)
if item is not None and item.uid == self._uid:
# Update uid index
indx_dict = collection.indexes["uid"]
del indx_dict[self._uid]
indx_dict[uid] = self.name
self._uid = uid
@property
def ctime(self) -> float:
"""
Property which represents the creation time of the object.
:getter: The float which can be passed to Python time stdlib.
:setter: Should only be used by the Cobbler Item Factory.
"""
return self._ctime
@ctime.setter
def ctime(self, ctime: float) -> None:
"""
Setter for the ctime property.
:param ctime: The time the object was created.
:raises TypeError: In case ``ctime`` was not of type float.
"""
if not isinstance(ctime, float): # type: ignore
raise TypeError("ctime needs to be of type float")
self._ctime = ctime
@property
def mtime(self) -> float:
"""
Represents the last modification time of the object via the API. This is not updated automagically.
:getter: The float which can be fed into a Python time object.
:setter: The new time something was edited via the API.
"""
return self._mtime
@mtime.setter
def mtime(self, mtime: float) -> None:
"""
Setter for the modification time of the object.
:param mtime: The new modification time.
"""
if not isinstance(mtime, float): # type: ignore
raise TypeError("mtime needs to be of type float")
self._mtime = mtime
@property
def name(self) -> str:
"""
Property which represents the objects name.
:getter: The name of the object.
:setter: Updating this has broad implications. Please try to use the ``rename()`` functionality from the
corresponding collection.
"""
return self._name
@name.setter
def name(self, name: str) -> None:
"""
The objects name.
:param name: object name string
:raises TypeError: In case ``name`` was not of type str.
:raises ValueError: In case there were disallowed characters in the name.
"""
if not isinstance(name, str): # type: ignore
            raise TypeError("name must be of type str")
if not RE_OBJECT_NAME.match(name):
raise ValueError(f"Invalid characters in name: '{name}'")
self._name = name
@LazyProperty
def comment(self) -> str:
"""
For every object you are able to set a unique comment which will be persisted on the object.
        :getter: The comment or an empty string.
:setter: The new comment for the item.
"""
return self._comment
@comment.setter
def comment(self, comment: str) -> None:
"""
Setter for the comment of the item.
        :param comment: The new comment. If ``None`` the comment will be set to an empty string.
"""
self._comment = comment
@InheritableProperty
def owners(self) -> List[Any]:
"""
This is a feature which is related to the ownership module of Cobbler which gives only specific people access
to specific records. Otherwise this is just a cosmetic feature to allow assigning records to specific users.
.. warning:: This is never validated against a list of existing users. Thus you can lock yourself out of a
record.
.. note:: This property can be set to ``<<inherit>>``.
:getter: Return the list of users which are currently assigned to the record.
:setter: The list of people which should be new owners. May lock you out if you are using the ownership
authorization module.
"""
return self._resolve("owners")
@owners.setter # type: ignore[no-redef]
def owners(self, owners: Union[str, List[Any]]):
"""
Setter for the ``owners`` property.
:param owners: The new list of owners. Will not be validated for existence.
"""
if not isinstance(owners, (str, list)): # type: ignore
raise TypeError("owners must be str or list!")
self._owners = self.api.input_string_or_list(owners)
@property
def inmemory(self) -> bool:
r"""
If set to ``false``, only the Item name is in memory. The rest of the Item's properties can be retrieved
either on demand or as a result of the ``load_items`` background task.
:getter: The inmemory for the item.
:setter: The new inmemory value for the object. Should only be used by the Cobbler serializers.
"""
return self._inmemory
@inmemory.setter
def inmemory(self, inmemory: bool):
"""
Setter for the inmemory of the item.
:param inmemory: The new inmemory value.
"""
self._inmemory = inmemory
@property
def cache(self) -> ItemCache:
"""
Getting the ItemCache object.
.. note:: This is a read only property.
:getter: This is the ItemCache object.
"""
return self._cache
def check_if_valid(self) -> None:
"""
Raise exceptions if the object state is inconsistent.
:raises CX: In case the name of the item is not set.
"""
if not self.name:
raise CX("Name is required")
@abstractmethod
def make_clone(self) -> "BaseItem":
"""
Must be defined in any subclass
"""
@abstractmethod
def _resolve(self, property_name: str) -> Any:
"""
Resolve the ``property_name`` value in the object tree. This function traverses the tree from the object to its
topmost parent and returns the first value that is not inherited. If the tree does not contain a value the
settings are consulted.
Items that don't have the concept of Inheritance via parent objects may still inherit from the settings. It is
the responsibility of the concrete class to implement the correct behavior.
:param property_name: The property name to resolve.
:raises AttributeError: In case one of the objects try to inherit from a parent that does not have
``property_name``.
:return: The resolved value.
"""
raise NotImplementedError("Must be implemented in a specific Item")
@classmethod
def _remove_depreacted_dict_keys(cls, dictionary: Dict[Any, Any]) -> None:
"""
        This method removes keys which should not be deserialized and are only there for API compatibility in
:meth:`~cobbler.items.abstract.base_item.BaseItem.to_dict`.
:param dictionary: The dict to update
"""
if "ks_meta" in dictionary:
dictionary.pop("ks_meta")
if "kickstart" in dictionary:
dictionary.pop("kickstart")
if "children" in dictionary:
dictionary.pop("children")
def sort_key(self, sort_fields: List[Any]):
"""
        Convert the item to a dict and sort the data by the given fields.
        :param sort_fields: The fields to sort the data by.
:return: The sorted data.
"""
data = self.to_dict()
return [data.get(x, "") for x in sort_fields]
def find_match(self, kwargs: Dict[str, Any], no_errors: bool = False) -> bool:
"""
Find from a given dict if the item matches the kv-pairs.
:param kwargs: The dict to match for in this item.
:param no_errors: How strict this matching is.
:return: True if matches or False if the item does not match.
"""
# used by find() method in collection.py
data = self.to_dict()
for (key, value) in list(kwargs.items()):
# Allow ~ to negate the compare
            if value is not None and isinstance(value, str) and value.startswith("~"):
res = not self.find_match_single_key(data, key, value[1:], no_errors)
else:
res = self.find_match_single_key(data, key, value, no_errors)
if not res:
return False
return True
def find_match_single_key(
self, data: Dict[str, Any], key: str, value: Any, no_errors: bool = False
) -> bool:
"""
Look if the data matches or not. This is an alternative for ``find_match()``.
:param data: The data to search through.
        :param key: The key to look for in the item.
:param value: The value for the key.
:param no_errors: How strict this matching is.
:return: Whether the data matches or not.
"""
# special case for systems
key_found_already = False
if "interfaces" in data:
if key in [
"cnames",
"connected_mode",
"if_gateway",
"ipv6_default_gateway",
"ipv6_mtu",
"ipv6_prefix",
"ipv6_secondaries",
"ipv6_static_routes",
"management",
"mtu",
"static",
"mac_address",
"ip_address",
"ipv6_address",
"netmask",
"virt_bridge",
"dhcp_tag",
"dns_name",
"static_routes",
"interface_type",
"interface_master",
"bonding_opts",
"bridge_opts",
"interface",
]:
key_found_already = True
for (name, interface) in list(data["interfaces"].items()):
if value == name:
return True
if value is not None and key in interface:
if self.__find_compare(interface[key], value):
return True
if key not in data:
if not key_found_already:
if not no_errors:
# FIXME: removed for 2.0 code, shouldn't cause any problems to not have an exception here?
# raise CX("searching for field that does not exist: %s" % key)
return False
else:
if value is not None: # FIXME: new?
return False
if value is None:
return True
return self.__find_compare(value, data[key])
def serialize(self) -> Dict[str, Any]:
"""
This method is a proxy for :meth:`~cobbler.items.abstract.base_item.BaseItem.to_dict` and contains additional
logic for serialization to a persistent location.
:return: The dictionary with the information for serialization.
"""
keys_to_drop = [
"kickstart",
"ks_meta",
"remote_grub_kernel",
"remote_grub_initrd",
]
result = self.to_dict()
for key in keys_to_drop:
result.pop(key, "")
return result
def deserialize(self) -> None:
"""
Deserializes the object itself and, if necessary, recursively all the objects it depends on.
"""
if not self._has_initialized:
return
item_dict = self.api.deserialize_item(self)
self.from_dict(item_dict)
def from_dict(self, dictionary: Dict[Any, Any]) -> None:
"""
Modify this object to take on values in ``dictionary``.
:param dictionary: This should contain all values which should be updated.
:raises AttributeError: In case during the process of setting a value for an attribute an error occurred.
:raises KeyError: In case there were keys which could not be set in the item dictionary.
"""
self._remove_depreacted_dict_keys(dictionary)
if len(dictionary) == 0:
return
old_has_initialized = self._has_initialized
self._has_initialized = False
result = copy.deepcopy(dictionary)
for key in dictionary:
lowered_key = key.lower()
# The following also works for child classes because self is a child class at this point and not only an
# Item.
if hasattr(self, "_" + lowered_key):
try:
setattr(self, lowered_key, dictionary[key])
except AttributeError as error:
raise AttributeError(
f'Attribute "{lowered_key}" could not be set!'
) from error
result.pop(key)
self._has_initialized = old_has_initialized
self.clean_cache()
if len(result) > 0:
raise KeyError(
f"The following keys supplied could not be set: {result.keys()}"
)
def to_dict(self, resolved: bool = False) -> Dict[Any, Any]:
"""
This converts everything in this object to a dictionary.
        :param resolved: If this is True, Cobbler will resolve the values to their final form, rather than give you
            the object's raw value.
:return: A dictionary with all values present in this object.
"""
if not self.inmemory:
self.deserialize()
cached_result = self.cache.get_dict_cache(resolved)
if cached_result is not None:
return cached_result
value: Dict[str, Any] = {}
for key, key_value in self.__dict__.items():
if BaseItem._is_dict_key(key):
new_key = key[1:].lower()
if isinstance(key_value, enum.Enum):
if resolved:
value[new_key] = getattr(self, new_key).value
else:
value[new_key] = key_value.value
elif new_key == "interfaces":
# This is the special interfaces dict. Let's fix it before it gets to the normal process.
serialized_interfaces = {}
interfaces = key_value
for interface_key in interfaces:
serialized_interfaces[interface_key] = interfaces[
interface_key
].to_dict(resolved)
value[new_key] = serialized_interfaces
elif isinstance(key_value, list):
value[new_key] = copy.deepcopy(key_value) # type: ignore
elif isinstance(key_value, dict):
if resolved:
value[new_key] = getattr(self, new_key)
else:
value[new_key] = copy.deepcopy(key_value) # type: ignore
elif (
isinstance(key_value, str)
and key_value == enums.VALUE_INHERITED
and resolved
):
value[new_key] = getattr(self, key[1:])
else:
value[new_key] = key_value
if "autoinstall" in value:
value.update({"kickstart": value["autoinstall"]}) # type: ignore
if "autoinstall_meta" in value:
value.update({"ks_meta": value["autoinstall_meta"]})
self.cache.set_dict_cache(value, resolved)
return value
def _clean_dict_cache(self, name: Optional[str]):
"""
Clearing the Item dict cache.
:param name: The name of Item attribute or None.
"""
if not self.api.settings().cache_enabled:
return
# Invalidating the cache of the object itself.
self.cache.clean_dict_cache()
def clean_cache(self, name: Optional[str] = None):
"""
Clearing the Item cache.
:param name: The name of Item attribute or None.
"""
if self._inmemory:
self._clean_dict_cache(name)
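The string branch of ``BaseItem.__find_compare`` above (plain equality unless the search term contains glob characters, ``fnmatch`` otherwise) can be sketched as a standalone function:

```python
# Standalone sketch of the case-insensitive string matching used by
# BaseItem.__find_compare above: plain equality when the search term contains
# no glob characters, fnmatch-style wildcard matching otherwise.
import fnmatch


def find_compare(from_search: str, from_obj: str) -> bool:
    from_obj_lower = from_obj.lower()
    from_search_lower = from_search.lower()
    # It's much faster to skip fnmatch when no wildcards are present.
    if not any(char in from_search_lower for char in "?*["):
        return from_obj_lower == from_search_lower
    return fnmatch.fnmatch(from_obj_lower, from_search_lower)


print(find_compare("web*", "webserver"))  # True  (wildcard match)
print(find_compare("Web01", "web01"))     # True  (case-insensitive equality)
print(find_compare("db*", "webserver"))   # False
```

This is the semantics behind ``cobbler find``-style searches on string fields; list, dict, and bool fields take the other branches of the real method.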
# ============================================================================
# cobbler_cobbler/cobbler/items/abstract/bootable_item.py
# ============================================================================
"""
Cobbler module that contains the code for a generic Cobbler item.
Changelog:
V3.4.0 (unreleased):
* Renamed to BootableItem
* (Re-)Added Cache implementation with the following new methods and properties:
* ``cache``
* ``inmemory``
* ``clean_cache()``
* Overhauled the parent/child system:
* ``children`` is now inside ``item.py``.
* ``tree_walk()`` was added.
* ``logical_parent`` was added.
* ``get_parent()`` was added which returns the internal reference that is used to return the object of the
``parent`` property.
* Removed:
* mgmt_classes
* mgmt_parameters
* last_cached_mtime
* fetchable_files
* boot_files
V3.3.4 (unreleased):
* No changes
V3.3.3:
* Added:
* ``grab_tree``
V3.3.2:
* No changes
V3.3.1:
* No changes
V3.3.0:
* This release switched from pure attributes to properties (getters/setters).
* Added:
* ``depth``: int
* ``comment``: str
* ``owners``: Union[list, str]
* ``mgmt_classes``: Union[list, str]
* ``mgmt_classes``: Union[dict, str]
* ``conceptual_parent``: Union[distro, profile]
* Removed:
* collection_mgr: collection_mgr
* Remove unreliable caching:
* ``get_from_cache()``
* ``set_cache()``
* ``remove_from_cache()``
* Changed:
* Constructor: Takes an instance of ``CobblerAPI`` instead of ``CollectionManager``.
* ``children``: dict -> list
* ``ctime``: int -> float
* ``mtime``: int -> float
* ``uid``: str
* ``kernel_options``: dict -> Union[dict, str]
* ``kernel_options_post``: dict -> Union[dict, str]
* ``autoinstall_meta``: dict -> Union[dict, str]
* ``fetchable_files``: dict -> Union[dict, str]
* ``boot_files``: dict -> Union[dict, str]
V3.2.2:
* No changes
V3.2.1:
* No changes
V3.2.0:
* No changes
V3.1.2:
* No changes
V3.1.1:
* No changes
V3.1.0:
* No changes
V3.0.1:
* No changes
V3.0.0:
* Added:
* ``collection_mgr``: collection_mgr
* ``kernel_options``: dict
* ``kernel_options_post``: dict
* ``autoinstall_meta``: dict
* ``fetchable_files``: dict
* ``boot_files``: dict
* ``template_files``: dict
* ``name``: str
* ``last_cached_mtime``: int
* Changed:
* Rename: ``cached_datastruct`` -> ``cached_dict``
* Removed:
* ``config``
V2.8.5:
* Added:
* ``config``: ?
* ``settings``: settings
* ``is_subobject``: bool
* ``parent``: Union[distro, profile]
* ``children``: dict
* ``log_func``: collection_mgr.api.log
* ``ctime``: int
* ``mtime``: int
* ``uid``: str
* ``last_cached_mtime``: int
* ``cached_datastruct``: str
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import pprint
from abc import ABC
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Type, TypeVar, Union
from cobbler import enums, utils
from cobbler.decorator import InheritableDictProperty, InheritableProperty, LazyProperty
from cobbler.items.abstract.inheritable_item import InheritableItem
from cobbler.utils import input_converters
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
T = TypeVar("T")
class BootableItem(InheritableItem, ABC):
"""
A BootableItem is a serializable thing that can appear in a Collection
"""
# Constants
TYPE_NAME = "bootable_abstract"
COLLECTION_TYPE = "bootable_abstract"
def __init__(
self, api: "CobblerAPI", *args: Any, is_subobject: bool = False, **kwargs: Any
):
"""
Constructor. This is a legacy class that will be phased out with the 3.4.0 release.
:param api: The Cobbler API object which is used for resolving information.
:param is_subobject: See above extensive description.
"""
super().__init__(api, *args, **kwargs)
self._kernel_options: Union[Dict[Any, Any], str] = {}
self._kernel_options_post: Union[Dict[Any, Any], str] = {}
self._autoinstall_meta: Union[Dict[Any, Any], str] = {}
self._template_files: Dict[str, str] = {}
self._inmemory = True
if len(kwargs) > 0:
kwargs.update({"is_subobject": is_subobject})
self.from_dict(kwargs)
if not self._has_initialized:
self._has_initialized = True
def __setattr__(self, name: str, value: Any):
"""
Intercepting an attempt to assign a value to an attribute.
        :param name: The attribute name.
        :param value: The attribute value.
"""
if BootableItem._is_dict_key(name) and self._has_initialized:
self.clean_cache(name)
super().__setattr__(name, value)
def __common_resolve(self, property_name: str):
settings_name = property_name
if property_name.startswith("proxy_url_"):
property_name = "proxy"
if property_name == "owners":
settings_name = "default_ownership"
attribute = "_" + property_name
return getattr(self, attribute), settings_name
def __resolve_get_parent_or_settings(self, property_name: str, settings_name: str):
settings = self.api.settings()
conceptual_parent = self.get_conceptual_parent()
if hasattr(self.parent, property_name):
return getattr(self.parent, property_name)
elif hasattr(conceptual_parent, property_name):
return getattr(conceptual_parent, property_name)
elif hasattr(settings, settings_name):
return getattr(settings, settings_name)
elif hasattr(settings, f"default_{settings_name}"):
return getattr(settings, f"default_{settings_name}")
return None
def _resolve(self, property_name: str) -> Any:
"""
Resolve the ``property_name`` value in the object tree. This function traverses the tree from the object to its
topmost parent and returns the first value that is not inherited. If the tree does not contain a value the
settings are consulted.
:param property_name: The property name to resolve.
:raises AttributeError: In case one of the objects try to inherit from a parent that does not have
``property_name``.
:return: The resolved value.
"""
attribute_value, settings_name = self.__common_resolve(property_name)
if attribute_value == enums.VALUE_INHERITED:
possible_return = self.__resolve_get_parent_or_settings(
property_name, settings_name
)
if possible_return is not None:
return possible_return
raise AttributeError(
f'{type(self)} "{self.name}" inherits property "{property_name}", but neither its parent nor'
f" settings have it"
)
return attribute_value
def _resolve_enum(
self, property_name: str, enum_type: Type[enums.ConvertableEnum]
) -> Any:
"""
See :meth:`~cobbler.items.abstract.bootable_item.BootableItem._resolve`
"""
attribute_value, settings_name = self.__common_resolve(property_name)
unwrapped_value = getattr(attribute_value, "value", "")
if unwrapped_value == enums.VALUE_INHERITED:
possible_return = self.__resolve_get_parent_or_settings(
                property_name, settings_name
)
if possible_return is not None:
return enum_type(possible_return)
raise AttributeError(
f'{type(self)} "{self.name}" inherits property "{property_name}", but neither its parent nor'
f" settings have it"
)
return attribute_value
def _resolve_dict(self, property_name: str) -> Dict[str, Any]:
"""
Merge the ``property_name`` dictionary of the object with the ``property_name`` of all its parents. The value
of the child takes precedence over the value of the parent.
:param property_name: The property name to resolve.
:return: The merged dictionary.
        :raises AttributeError: In case the object has no attribute with the given ``property_name``.
"""
attribute = "_" + property_name
attribute_value = getattr(self, attribute)
settings = self.api.settings()
merged_dict: Dict[str, Any] = {}
conceptual_parent = self.get_conceptual_parent()
if hasattr(conceptual_parent, property_name):
merged_dict.update(getattr(conceptual_parent, property_name))
elif hasattr(settings, property_name):
merged_dict.update(getattr(settings, property_name))
if attribute_value != enums.VALUE_INHERITED:
merged_dict.update(attribute_value)
utils.dict_annihilate(merged_dict)
return merged_dict
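The resolution order implemented by `_resolve_dict` can be sketched standalone. This is a simplified approximation, not the real method: `annihilate` is a hypothetical stand-in for `utils.dict_annihilate` (assumed here to drop `!key` removal markers together with the keys they negate), and the parent/settings lookup is reduced to plain dict arguments.

```python
VALUE_INHERITED = "<<inherit>>"


def annihilate(merged):
    """Hypothetical stand-in for utils.dict_annihilate: drop '!key' markers
    together with the keys they negate."""
    for key in [k for k in merged if k.startswith("!")]:
        merged.pop(key[1:], None)
        merged.pop(key, None)


def resolve_dict(own, parent, settings):
    """Merge with the precedence used by _resolve_dict: the parent value (or
    settings, when no parent provides one) first, then the item's own
    non-inherited values on top."""
    merged = {}
    if parent is not None:
        merged.update(parent)
    else:
        merged.update(settings)
    if own != VALUE_INHERITED:
        merged.update(own)
    annihilate(merged)
    return merged


print(resolve_dict({"quiet": None, "!debug": None}, {"debug": "1"}, {"console": "tty0"}))
# prints {'quiet': None}
```

The child's `!debug` marker cancels the inherited `debug=1` from the parent, so only the genuine local addition survives.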
def _deduplicate_dict(
self, property_name: str, value: Dict[str, T]
) -> Dict[str, T]:
"""
        Filter out the key/value pairs that may come from the parent or global settings.
        Note: we do not know exactly which resolver a key/value pair belongs to; we just deduplicate them.
        :param property_name: The property name to deduplicate.
:param value: The value that should be deduplicated.
:returns: The deduplicated dictionary
"""
_, settings_name = self.__common_resolve(property_name)
settings = self.api.settings()
conceptual_parent = self.get_conceptual_parent()
if hasattr(self.parent, property_name):
parent_value = getattr(self.parent, property_name)
elif hasattr(conceptual_parent, property_name):
parent_value = getattr(conceptual_parent, property_name)
elif hasattr(settings, settings_name):
parent_value = getattr(settings, settings_name)
elif hasattr(settings, f"default_{settings_name}"):
parent_value = getattr(settings, f"default_{settings_name}")
else:
parent_value = {}
# Because we use getattr pyright cannot correctly check this.
for key in parent_value: # type: ignore
if key in value and parent_value[key] == value[key]: # type: ignore
value.pop(key) # type: ignore
return value
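The deduplication step can be illustrated in isolation. A minimal sketch, assuming plain dicts in place of the resolved parent value: pairs identical to the parent's are dropped, so only genuine overrides stay stored on the item itself.

```python
def deduplicate(value, parent_value):
    """Drop every key/value pair from `value` that the parent already
    provides with the same value; keep real overrides and additions."""
    for key in list(parent_value):
        if key in value and parent_value[key] == value[key]:
            value.pop(key)
    return value


print(deduplicate({"a": "1", "b": "2"}, {"a": "1", "b": "3"}))
# prints {'b': '2'}
```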
@InheritableDictProperty
def kernel_options(self) -> Dict[Any, Any]:
"""
Kernel options are a space delimited list, like 'a=b c=d e=f g h i=j' or a dict.
.. note:: This property can be set to ``<<inherit>>``.
:getter: The parsed kernel options.
        :setter: The new kernel options as a space delimited list. May raise ``TypeError`` in case of parsing problems.
"""
return self._resolve_dict("kernel_options")
@kernel_options.setter # type: ignore[no-redef]
def kernel_options(self, options: Dict[str, Any]):
"""
Setter for ``kernel_options``.
:param options: The new kernel options as a space delimited list.
        :raises TypeError: In case the values set could not be parsed successfully.
"""
try:
value = input_converters.input_string_or_dict(options, allow_multiples=True)
if value == enums.VALUE_INHERITED:
self._kernel_options = enums.VALUE_INHERITED
return
# pyright doesn't understand that the only valid str return value is this constant.
self._kernel_options = self._deduplicate_dict("kernel_options", value) # type: ignore
except TypeError as error:
raise TypeError("invalid kernel value") from error
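The 'a=b c=d e=f g h i=j' format mentioned in the docstrings can be illustrated with a simplified parser. This only approximates `input_converters.input_string_or_dict` with `allow_multiples=True`; the real helper also handles dict input, the `<<inherit>>` marker, and stricter validation.

```python
import shlex


def parse_kernel_options(options, allow_multiples=True):
    """Parse a space-delimited 'key=value' string into a dict. Bare tokens
    map to None; with allow_multiples, repeated keys collect into a list."""
    result = {}
    for token in shlex.split(options):
        key, _, val = token.partition("=")
        value = val if "=" in token else None
        if allow_multiples and key in result:
            existing = result[key]
            result[key] = existing + [value] if isinstance(existing, list) else [existing, value]
        else:
            result[key] = value
    return result


print(parse_kernel_options("console=tty0 console=ttyS0,115200 quiet"))
# prints {'console': ['tty0', 'ttyS0,115200'], 'quiet': None}
```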
@InheritableDictProperty
def kernel_options_post(self) -> Dict[str, Any]:
"""
Post kernel options are a space delimited list, like 'a=b c=d e=f g h i=j' or a dict.
.. note:: This property can be set to ``<<inherit>>``.
:getter: The dictionary with the parsed values.
:setter: Accepts str in above mentioned format or directly a dict.
"""
return self._resolve_dict("kernel_options_post")
@kernel_options_post.setter # type: ignore[no-redef]
def kernel_options_post(self, options: Union[Dict[Any, Any], str]) -> None:
"""
Setter for ``kernel_options_post``.
:param options: The new kernel options as a space delimited list.
        :raises TypeError: In case the options could not be split successfully.
"""
try:
self._kernel_options_post = input_converters.input_string_or_dict(
options, allow_multiples=True
)
except TypeError as error:
raise TypeError("invalid post kernel options") from error
@InheritableDictProperty
def autoinstall_meta(self) -> Dict[Any, Any]:
"""
A comma delimited list of key value pairs, like 'a=b,c=d,e=f' or a dict.
The meta tags are used as input to the templating system to preprocess automatic installation template files.
.. note:: This property can be set to ``<<inherit>>``.
:getter: The metadata or an empty dict.
:setter: Accepts anything which can be split by :meth:`~cobbler.utils.input_converters.input_string_or_dict`.
"""
return self._resolve_dict("autoinstall_meta")
@autoinstall_meta.setter # type: ignore[no-redef]
def autoinstall_meta(self, options: Dict[Any, Any]):
"""
Setter for the ``autoinstall_meta`` property.
:param options: The new options for the automatic installation meta options.
        :raises TypeError: If splitting the value does not succeed.
"""
value = input_converters.input_string_or_dict(options, allow_multiples=True)
if value == enums.VALUE_INHERITED:
self._autoinstall_meta = enums.VALUE_INHERITED
return
# pyright doesn't understand that the only valid str return value is this constant.
self._autoinstall_meta = self._deduplicate_dict("autoinstall_meta", value) # type: ignore
@LazyProperty
def template_files(self) -> Dict[str, str]:
"""
File mappings for built-in configuration management. The keys are the template source files and the value is the
destination. The destination must be inside the bootloc (most of the time TFTP server directory).
        This property additionally took over the duties of boot_files. During signature import the values of "boot_files"
        will be added to "template_files".
:getter: The dictionary with name-path key-value pairs.
:setter: A dict. If not a dict must be a str which is split by
:meth:`~cobbler.utils.input_converters.input_string_or_dict`. Raises ``TypeError`` otherwise.
"""
return self._template_files
@template_files.setter
def template_files(self, template_files: Union[str, Dict[str, str]]) -> None:
"""
        A comma separated list of source=destination templates that should be generated during a sync.
:param template_files: The new value for the template files which are used for the item.
        :raises TypeError: In case the conversion from non dict values was not successful.
"""
try:
self._template_files = input_converters.input_string_or_dict_no_inherit(
template_files, allow_multiples=False
)
except TypeError as error:
raise TypeError("invalid template files specified") from error
def dump_vars(
self, formatted_output: bool = True, remove_dicts: bool = False
) -> Union[Dict[str, Any], str]:
"""
Dump all variables.
:param formatted_output: Whether to format the output or not.
:param remove_dicts: If True the dictionaries will be put into str form.
:return: The raw or formatted data.
"""
raw = utils.blender(self.api, remove_dicts, self) # type: ignore
if formatted_output:
return pprint.pformat(raw)
return raw
def deserialize(self) -> None:
"""
Deserializes the object itself and, if necessary, recursively all the objects it depends on.
"""
def deserialize_ancestor(ancestor_item_type: str, ancestor_name: str):
if ancestor_name not in {"", enums.VALUE_INHERITED}:
ancestor = self.api.get_items(ancestor_item_type).get(ancestor_name)
if ancestor is not None and not ancestor.inmemory:
ancestor.deserialize()
if not self._has_initialized:
return
item_dict = self.api.deserialize_item(self)
if item_dict["inmemory"]:
for (
ancestor_item_type,
ancestor_deps,
) in InheritableItem.TYPE_DEPENDENCIES.items():
for ancestor_dep in ancestor_deps:
if self.TYPE_NAME == ancestor_dep.dependant_item_type:
attr_name = ancestor_dep.dependant_type_attribute
if attr_name not in item_dict:
continue
attr_val = item_dict[attr_name]
if isinstance(attr_val, str):
deserialize_ancestor(ancestor_item_type, attr_val)
elif isinstance(attr_val, list): # type: ignore
attr_val: List[str]
for ancestor_name in attr_val:
deserialize_ancestor(ancestor_item_type, ancestor_name)
self.from_dict(item_dict)
def _clean_dict_cache(self, name: Optional[str]):
"""
Clearing the Item dict cache.
:param name: The name of Item attribute or None.
"""
if not self.api.settings().cache_enabled:
return
if name is not None and self._inmemory:
attr = getattr(type(self), name[1:])
if (
isinstance(attr, (InheritableProperty, InheritableDictProperty))
and self.COLLECTION_TYPE != InheritableItem.COLLECTION_TYPE # type: ignore
and self.api.get_items(self.COLLECTION_TYPE).get(self.name) is not None
):
# Invalidating "resolved" caches
for dep_item in self.descendants:
dep_item.cache.set_dict_cache(None, True)
# Invalidating the cache of the object itself.
self.cache.clean_dict_cache()
# File: cobbler_cobbler/cobbler/cobbler_collections/repos.py
"""
Cobbler module that at runtime holds all repos in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import os.path
from typing import TYPE_CHECKING, Any, Dict
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import repo
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Repos(collection.Collection[repo.Repo]):
"""
    Repositories in Cobbler are a way to create a local mirror of a yum repository.
When used in conjunction with a mirrored distro tree (see "cobbler import"),
outside bandwidth needs can be reduced and/or eliminated.
"""
@staticmethod
def collection_type() -> str:
return "repo"
@staticmethod
def collection_types() -> str:
return "repos"
def factory_produce(self, api: "CobblerAPI", seed_data: Dict[str, Any]):
"""
        Return a Repo forged from seed_data
        :param api: Parameter is skipped.
        :param seed_data: The data the object is initialized with.
:returns: The created repository.
"""
return repo.Repo(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
):
"""
Remove element named 'name' from the collection
:raises CX: In case the object does not exist.
"""
# NOTE: with_delete isn't currently meaningful for repos
        # but is left in for consistency in the API. Unused.
obj = self.find(name=name)
if obj is None:
raise CX(f"cannot delete an object that does not exist: {name}")
if isinstance(obj, list):
# Will never happen, but we want to make mypy happy.
raise CX("Ambiguous match detected!")
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/repo/pre/*", []
)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/repo/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
path = os.path.join(self.api.settings().webdir, "repo_mirror", obj.name)
if os.path.exists(path):
filesystem_helpers.rmtree(path)
# File: cobbler_cobbler/cobbler/cobbler_collections/images.py
"""
Cobbler module that at runtime holds all images in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING, Any, Dict
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import image
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Images(collection.Collection[image.Image]):
"""
    An image instance represents an ISO or virt image we want to track
    and repeatedly install. It differs from an answer-file based installation.
"""
@staticmethod
def collection_type() -> str:
return "image"
@staticmethod
def collection_types() -> str:
return "images"
def factory_produce(self, api: "CobblerAPI", seed_data: Dict[str, Any]):
"""
        Return an Image forged from seed_data
:param api: Parameter is skipped.
:param seed_data: Data to seed the object with.
:returns: The created object.
"""
return image.Image(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = True,
) -> None:
"""
Remove element named 'name' from the collection
        :raises CX: In case the object does not exist or removal would orphan a system.
"""
# NOTE: with_delete isn't currently meaningful for repos but is left in for consistency in the API. Unused.
obj = self.listing.get(name, None)
if obj is None:
raise CX(f"cannot delete an object that does not exist: {name}")
        # first see if any systems use this image
if not recursive:
for system in self.api.systems():
if system.image == name:
raise CX(f"removal would orphan system: {system.name}")
if recursive:
kids = self.api.find_system(return_list=True, image=obj.name)
if kids is None:
kids = []
if not isinstance(kids, list):
raise ValueError("Expected list or None from find_items!")
for k in kids:
self.api.remove_system(k, recursive=True)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/image/pre/*", []
)
if with_sync:
lite_sync = self.api.get_sync()
lite_sync.remove_single_image(obj)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/image/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
# File: cobbler_cobbler/cobbler/cobbler_collections/collection.py
"""
This module contains the code for the abstract base collection that powers all the other collections.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import logging
import os
import time
from abc import abstractmethod
from threading import Lock
from typing import (
TYPE_CHECKING,
Any,
Dict,
Generic,
Iterator,
List,
Optional,
TypeVar,
Union,
)
from cobbler import enums, utils
from cobbler.cexceptions import CX
from cobbler.items import distro, image, menu, profile, repo, system
from cobbler.items.abstract.base_item import BaseItem
from cobbler.items.abstract.inheritable_item import InheritableItem
if TYPE_CHECKING:
from cobbler.actions.sync import CobblerSync
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.manager import CollectionManager
ITEM = TypeVar("ITEM", bound=BaseItem)
FIND_KWARGS = Union[ # pylint: disable=invalid-name
str, int, bool, Dict[Any, Any], List[Any]
]
class Collection(Generic[ITEM]):
"""
Base class for any serializable list of things.
"""
def __init__(self, collection_mgr: "CollectionManager"):
"""
Constructor.
:param collection_mgr: The collection manager to resolve all information with.
"""
self.collection_mgr = collection_mgr
self.listing: Dict[str, ITEM] = {}
self.api = self.collection_mgr.api
self.__lite_sync: Optional["CobblerSync"] = None
self.lock: Lock = Lock()
self._inmemory: bool = not self.api.settings().lazy_start
self._deserialize_running: bool = False
# Secondary indexes for the collection.
# Only unique indexes on string values are supported.
# Empty strings are not indexed.
self.indexes: Dict[str, Dict[str, str]] = {"uid": {}}
self.logger = logging.getLogger()
def __iter__(self) -> Iterator[ITEM]:
"""
Iterator for the collection. Allows list comprehensions, etc.
"""
for obj in list(self.listing.values()):
yield obj
def __len__(self) -> int:
"""
Returns size of the collection.
"""
return len(list(self.listing.values()))
@property
def lite_sync(self) -> "CobblerSync":
"""
Provide a ready to use CobblerSync object.
:getter: Return the object that can update the filesystem state to a new one.
"""
if self.__lite_sync is None:
self.__lite_sync = self.api.get_sync()
return self.__lite_sync
@property
def inmemory(self) -> bool:
r"""
If set to ``true``, then all items of the collection are loaded into memory.
:getter: The inmemory for the collection.
:setter: The new inmemory value for the collection.
"""
return self._inmemory
@inmemory.setter
def inmemory(self, inmemory: bool):
"""
Setter for the inmemory of the collection.
:param inmemory: The new inmemory value.
"""
self._inmemory = inmemory
@property
def deserialize_running(self) -> bool:
r"""
If set to ``true``, then the collection items are currently being loaded from disk.
:getter: The deserialize_running for the collection.
:setter: The new deserialize_running value for the collection.
"""
return self._deserialize_running
@deserialize_running.setter
def deserialize_running(self, deserialize_running: bool):
"""
Setter for the deserialize_running of the collection.
:param deserialize_running: The new deserialize_running value.
"""
self._deserialize_running = deserialize_running
@abstractmethod
def factory_produce(self, api: "CobblerAPI", seed_data: Dict[str, Any]) -> ITEM:
"""
Must override in subclass. Factory_produce returns an Item object from dict.
:param api: The API to resolve all information with.
:param seed_data: Unused Parameter in the base collection.
"""
@abstractmethod
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
) -> None:
"""
Remove an item from collection. This method must be overridden in any subclass.
:param name: Item Name
:param with_delete: sync and run triggers
:param with_sync: sync to server file system
:param with_triggers: run "on delete" triggers
:param recursive: recursively delete children
"""
def get(self, name: str) -> Optional[ITEM]:
"""
Return object with name in the collection
:param name: The name of the object to retrieve from the collection.
:return: The object if it exists. Otherwise, "None".
"""
return self.listing.get(name, None)
def get_names(self) -> List[str]:
"""
Return list of names in the collection.
:return: list of names in the collection.
"""
return list(self.listing)
def find(
self,
name: str = "",
return_list: bool = False,
no_errors: bool = False,
**kargs: FIND_KWARGS,
) -> Optional[Union[List[ITEM], ITEM]]:
"""
Return first object in the collection that matches all item='value' pairs passed, else return None if no objects
can be found. When return_list is set, can also return a list. Empty list would be returned instead of None in
that case.
:param name: The object name which should be found.
:param return_list: If a list should be returned or the first match.
:param no_errors: If errors which are possibly thrown while searching should be ignored or not.
:param kargs: If name is present, this is optional, otherwise this dict needs to have at least a key with
``name``. You may specify more keys to finetune the search.
:return: The first item or a list with all matches.
:raises ValueError: In case no arguments for searching were specified.
"""
matches: List[ITEM] = []
if name:
kargs["name"] = name
kargs = self.__rekey(kargs)
# no arguments is an error, so we don't return a false match
if len(kargs) == 0:
raise ValueError("calling find with no arguments")
# performance: if the only key is name we can skip the whole loop
if len(kargs) == 1 and "name" in kargs and not return_list:
try:
return self.listing.get(kargs["name"], None) # type: ignore
except Exception:
return self.listing.get(name, None)
if self.api.settings().lazy_start:
# Forced deserialization of the entire collection to prevent deadlock in the search loop
self._deserialize()
with self.lock:
result = self.find_by_indexes(kargs)
if result is not None:
matches = result
if len(kargs) > 0:
for obj in self:
if obj.inmemory and obj.find_match(kargs, no_errors=no_errors):
matches.append(obj)
if not return_list:
if len(matches) == 0:
return None
return matches[0]
return matches
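The `find()` contract can be illustrated with a toy version (an assumption-light sketch, not the real `Collection.find`): all key/value pairs must match (logical AND), and `return_list` switches between "first hit or None" and a possibly empty list. Items are plain dicts here instead of Cobbler item objects.

```python
def find(items, return_list=False, **kargs):
    """Return the first item matching every key/value pair, or all matches
    when return_list is True. Raises on an empty query, like the original."""
    if not kargs:
        raise ValueError("calling find with no arguments")
    matches = [i for i in items if all(i.get(k) == v for k, v in kargs.items())]
    if not return_list:
        return matches[0] if matches else None
    return matches


items = [{"name": "web01", "profile": "rhel9"}, {"name": "web02", "profile": "rhel9"}]
print(find(items, profile="rhel9", name="web02"))
# prints {'name': 'web02', 'profile': 'rhel9'}
```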
SEARCH_REKEY = {
"kopts": "kernel_options",
"kopts_post": "kernel_options_post",
"inherit": "parent",
"ip": "ip_address",
"mac": "mac_address",
"virt-auto-boot": "virt_auto_boot",
"virt-file-size": "virt_file_size",
"virt-disk-driver": "virt_disk_driver",
"virt-ram": "virt_ram",
"virt-path": "virt_path",
"virt-type": "virt_type",
"virt-bridge": "virt_bridge",
"virt-cpus": "virt_cpus",
"virt-host": "virt_host",
"virt-group": "virt_group",
"dhcp-tag": "dhcp_tag",
"netboot-enabled": "netboot_enabled",
"enable_gpxe": "enable_ipxe",
"boot_loader": "boot_loaders",
}
def __rekey(self, _dict: Dict[str, Any]) -> Dict[str, Any]:
"""
Find calls from the command line ("cobbler system find") don't always match with the keys from the datastructs
and this makes them both line up without breaking compatibility with either. Thankfully we don't have a LOT to
remap.
:param _dict: The dict which should be remapped.
        :return: The remapped dict whose keys match the internal data structures.
"""
new_dict: Dict[str, Any] = {}
for key in _dict.keys():
if key in self.SEARCH_REKEY:
newkey = self.SEARCH_REKEY[key]
new_dict[newkey] = _dict[key]
else:
new_dict[key] = _dict[key]
return new_dict
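The remapping performed by `__rekey` boils down to a dictionary lookup with pass-through for unknown keys. A minimal sketch with a trimmed `SEARCH_REKEY` table:

```python
# Trimmed copy of the remap table; the real SEARCH_REKEY holds many more keys.
SEARCH_REKEY = {"kopts": "kernel_options", "mac": "mac_address"}


def rekey(query):
    """Map CLI-style keys onto item attribute names; unknown keys pass through."""
    return {SEARCH_REKEY.get(key, key): value for key, value in query.items()}


print(rekey({"mac": "aa:bb:cc:dd:ee:ff", "name": "web01"}))
# prints {'mac_address': 'aa:bb:cc:dd:ee:ff', 'name': 'web01'}
```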
def to_list(self) -> List[Dict[str, Any]]:
"""
Serialize the collection
:return: All elements of the collection as a list.
"""
return [item_obj.to_dict() for item_obj in list(self.listing.values())]
def from_list(self, _list: List[Dict[str, Any]]) -> None:
"""
Create all collection object items from ``_list``.
:param _list: The list with all item dictionaries.
"""
if _list is None: # type: ignore
return
for item_dict in _list:
try:
item = self.factory_produce(self.api, item_dict)
self.add(item)
except Exception as exc:
self.logger.error(
"Error while loading a collection: %s. Skipping collection %s!",
exc,
self.collection_type(),
)
def copy(self, ref: ITEM, newname: str):
"""
Copy an object with a new name into the same collection.
:param ref: The reference to the object which should be copied.
:param newname: The new name for the copied object.
"""
copied_item: ITEM = ref.make_clone()
copied_item.ctime = time.time()
copied_item.name = newname
self.add(
copied_item,
save=True,
with_copy=True,
with_triggers=True,
with_sync=True,
check_for_duplicate_names=True,
)
def rename(
self,
ref: "ITEM",
newname: str,
with_sync: bool = True,
with_triggers: bool = True,
):
"""
Allows an object "ref" to be given a new name without affecting the rest of the object tree.
:param ref: The reference to the object which should be renamed.
:param newname: The new name for the object.
:param with_sync: If a sync should be triggered when the object is renamed.
:param with_triggers: If triggers should be run when the object is renamed.
"""
# Nothing to do when it is the same name
if newname == ref.name:
return
# Save the old name
oldname = ref.name
with self.lock:
# Delete the old item
self.collection_mgr.serialize_delete_one_item(ref)
self.remove_from_indexes(ref)
self.listing.pop(oldname)
# Change the name of the object
ref.name = newname
# Save just this item
self.collection_mgr.serialize_one_item(ref)
self.listing[newname] = ref
self.add_to_indexes(ref)
for dep_type in InheritableItem.TYPE_DEPENDENCIES[ref.COLLECTION_TYPE]:
items = self.api.find_items(
dep_type[0], {dep_type[1]: oldname}, return_list=True
)
if items is None:
continue
if not isinstance(items, list):
                raise ValueError("Unexpected return value from find_items!")
for item in items:
attr = getattr(item, "_" + dep_type[1])
if isinstance(attr, (str, BaseItem)):
setattr(item, dep_type[1], newname)
elif isinstance(attr, list):
for i, attr_val in enumerate(attr): # type: ignore
if attr_val == oldname:
attr[i] = newname
else:
raise CX(
f'Internal error, unknown attribute type {type(attr)} for "{item.name}"!'
)
self.api.get_items(item.COLLECTION_TYPE).add(
item, # type: ignore
save=True,
with_sync=with_sync,
with_triggers=with_triggers,
)
# for a repo, rename the mirror directory
if isinstance(ref, repo.Repo):
# if ref.COLLECTION_TYPE == "repo":
path = os.path.join(self.api.settings().webdir, "repo_mirror")
old_path = os.path.join(path, oldname)
if os.path.exists(old_path):
new_path = os.path.join(path, ref.name)
os.renames(old_path, new_path)
# for a distro, rename the mirror and references to it
if isinstance(ref, distro.Distro):
# if ref.COLLECTION_TYPE == "distro":
path = ref.find_distro_path() # type: ignore
# create a symlink for the new distro name
ref.link_distro() # type: ignore
# Test to see if the distro path is based directly on the name of the distro. If it is, things need to
# updated accordingly.
if os.path.exists(path) and path == str(
os.path.join(self.api.settings().webdir, "distro_mirror", ref.name)
):
newpath = os.path.join(
self.api.settings().webdir, "distro_mirror", ref.name
)
os.renames(path, newpath)
# update any reference to this path ...
distros = self.api.distros()
for distro_obj in distros:
if distro_obj.kernel.find(path) == 0:
distro_obj.kernel = distro_obj.kernel.replace(path, newpath)
distro_obj.initrd = distro_obj.initrd.replace(path, newpath)
self.collection_mgr.serialize_one_item(distro_obj)
def add(
self,
ref: ITEM,
save: bool = False,
with_copy: bool = False,
with_triggers: bool = True,
with_sync: bool = True,
quick_pxe_update: bool = False,
check_for_duplicate_names: bool = False,
) -> None:
"""
Add an object to the collection
:param ref: The reference to the object.
        :param save: If this is true then the object is persisted on the disk.
:param with_copy: Is a bit of a misnomer, but lots of internal add operations can run with "with_copy" as False.
True means a real final commit, as if entered from the command line (or basically, by a user).
With with_copy as False, the particular add call might just be being run during
deserialization, in which case extra semantics around the add don't really apply. So, in that
case, don't run any triggers and don't deal with any actual files.
:param with_sync: If a sync should be triggered when the object is renamed.
:param with_triggers: If triggers should be run when the object is added.
:param quick_pxe_update: This decides if there should be run a quick or full update after the add was done.
:param check_for_duplicate_names: If the name of an object should be unique or not.
        :raises TypeError: Raised in case ``ref`` is None.
:raises ValueError: Raised in case the name of ``ref`` is empty.
"""
if ref is None: # type: ignore
raise TypeError("Unable to add a None object")
ref.check_if_valid()
if save:
now = float(time.time())
if ref.ctime == 0.0:
ref.ctime = now
ref.mtime = now
# migration path for old API parameter that I've renamed.
if with_copy and not save:
save = with_copy
if not save:
# For people that aren't quite aware of the API if not saving the object, you can't run these features.
with_triggers = False
with_sync = False
# Avoid adding objects to the collection with the same name
if check_for_duplicate_names:
for item_obj in self.listing.values():
if item_obj.name == ref.name:
raise CX(
f'An object with that name "{ref.name}" exists already. Try "edit"?'
)
if ref.COLLECTION_TYPE != self.collection_type():
raise TypeError("API error: storing wrong data type in collection")
# failure of a pre trigger will prevent the object from being added
if save and with_triggers:
utils.run_triggers(
self.api,
ref,
f"/var/lib/cobbler/triggers/add/{self.collection_type()}/pre/*",
)
with self.lock:
self.listing[ref.name] = ref
self.add_to_indexes(ref)
# perform filesystem operations
if save:
# Save just this item if possible, if not, save the whole collection
self.collection_mgr.serialize_one_item(ref)
if with_sync:
if isinstance(ref, system.System):
# we don't need openvz containers to be network bootable
if ref.virt_type == enums.VirtType.OPENVZ:
ref.netboot_enabled = False
self.lite_sync.add_single_system(ref)
elif isinstance(ref, profile.Profile):
# we don't need openvz containers to be network bootable
if ref.virt_type == "openvz": # type: ignore
ref.enable_menu = False
self.lite_sync.add_single_profile(ref)
self.api.sync_systems(
systems=self.find(
"system",
return_list=True,
no_errors=False,
**{"profile": ref.name},
) # type: ignore
)
elif isinstance(ref, distro.Distro):
self.lite_sync.add_single_distro(ref)
elif isinstance(ref, image.Image):
self.lite_sync.add_single_image(ref)
elif isinstance(ref, repo.Repo):
pass
elif isinstance(ref, menu.Menu):
pass
else:
self.logger.error(
"Internal error. Object type not recognized: %s", type(ref)
)
if not with_sync and quick_pxe_update:
if isinstance(ref, system.System):
self.lite_sync.update_system_netboot_status(ref.name)
            # save the tree, so if necessary, scripts can examine it.
if with_triggers:
utils.run_triggers(
self.api, ref, "/var/lib/cobbler/triggers/change/*", []
)
utils.run_triggers(
self.api,
ref,
f"/var/lib/cobbler/triggers/add/{self.collection_type()}/post/*",
[],
)
def _deserialize(self) -> None:
"""
Loading all collection items from disk in case of lazy start.
"""
if self.inmemory or self.deserialize_running:
# Preventing infinite recursion if a collection search is required when loading item properties.
# Also prevents unnecessary looping through the collection if all items are already in memory.
return
self.deserialize_running = True
for obj_name in self.get_names():
obj = self.get(obj_name)
if obj is not None and not obj.inmemory:
obj.deserialize()
self.inmemory = True
self.deserialize_running = False
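The interplay of the `inmemory` and `deserialize_running` flags can be demonstrated with a toy collection. `LazyCollection` is a hypothetical stand-in whose loaders may re-enter `deserialize()`, mirroring how loading one item's properties can trigger a collection search that would otherwise recurse forever.

```python
class LazyCollection:
    """Toy stand-in showing the re-entrancy guard used by _deserialize()."""

    def __init__(self, loaders):
        self.loaders = loaders          # callables that may re-enter deserialize()
        self.inmemory = False
        self.deserialize_running = False
        self.loaded = 0

    def deserialize(self):
        # Guard: already loaded, or a load is in progress further up the stack.
        if self.inmemory or self.deserialize_running:
            return
        self.deserialize_running = True
        for load in self.loaders:
            load(self)                  # a loader may call self.deserialize() again
            self.loaded += 1
        self.inmemory = True
        self.deserialize_running = False


col = LazyCollection([lambda c: c.deserialize(), lambda c: None])
col.deserialize()
print(col.loaded, col.inmemory)
# prints 2 True
```

The first loader's recursive call hits the guard and returns immediately, so every loader still runs exactly once.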
def add_to_indexes(self, ref: ITEM) -> None:
"""
Add indexes for the object.
:param ref: The reference to the object whose indexes are updated.
"""
indx_dict = self.indexes["uid"]
indx_dict[ref.uid] = ref.name
def remove_from_indexes(self, ref: ITEM) -> None:
"""
Remove index keys for the object.
:param ref: The reference to the object whose index keys are removed.
"""
indx_dict = self.indexes["uid"]
indx_dict.pop(ref.uid, None)
def find_by_indexes(self, kargs: Dict[str, Any]) -> Optional[List[ITEM]]:
"""
Searching for items in the collection by indexes.
        :param kargs: The dict to match the items against.
"""
result: Optional[List[ITEM]] = None
found_keys: List[str] = []
for key, value in kargs.items():
# fnmatch and "~" are not supported
if (
key not in self.indexes
or value[:1] == "~"
or "?" in value
or "*" in value
or "[" in value
):
continue
indx_dict = self.indexes[key]
if value in indx_dict:
if result is None:
result = []
obj = self.listing.get(indx_dict[value])
if obj is not None:
result.append(obj)
else:
self.logger.error(
'Internal error. The "%s" index is corrupted.', key
)
found_keys.append(key)
for key in found_keys:
kargs.pop(key)
return result
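The secondary-index lookup can be sketched with module-level dicts. A toy version of the `uid` index, assuming exact-match queries only (wildcard and `~` queries fall back to the full scan in the real code):

```python
# index maps a uid straight to the item name, so a lookup is two dict gets
# instead of a scan over the whole listing.
listing = {}
indexes = {"uid": {}}


def add(item):
    """Store the item and register its uid in the secondary index."""
    listing[item["name"]] = item
    indexes["uid"][item["uid"]] = item["name"]


def find_by_uid(uid):
    """Exact-match lookup through the index; None when the uid is unknown."""
    name = indexes["uid"].get(uid)
    return listing.get(name) if name is not None else None


add({"name": "web01", "uid": "abc123"})
print(find_by_uid("abc123")["name"])
# prints web01
```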
@staticmethod
@abstractmethod
def collection_type() -> str:
"""
Returns the string key for the name of the collection (used by serializer etc)
"""
@staticmethod
@abstractmethod
def collection_types() -> str:
"""
Returns the string key for the plural name of the collection (used by serializer)
"""
# File: cobbler_cobbler/cobbler/cobbler_collections/systems.py
"""
Cobbler module that at runtime holds all systems in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2008-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING, Any, Dict, Set
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import network_interface, system
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.manager import CollectionManager
class Systems(collection.Collection[system.System]):
"""
Systems are hostnames/MACs/IP names and the associated profile
they belong to.
"""
def __init__(self, collection_mgr: "CollectionManager"):
"""
Constructor.
:param collection_mgr: The collection manager to resolve all information with.
"""
super().__init__(collection_mgr)
self.indexes: Dict[str, Dict[str, str]] = {
"uid": {},
"mac_address": {},
"ip_address": {},
"ipv6_address": {},
"dns_name": {},
}
settings = self.api.settings()
self.disabled_indexes: Dict[str, bool] = {
"mac_address": settings.allow_duplicate_macs,
"ip_address": settings.allow_duplicate_ips,
"ipv6_address": settings.allow_duplicate_ips,
"dns_name": settings.allow_duplicate_hostnames,
}
@staticmethod
def collection_type() -> str:
return "system"
@staticmethod
def collection_types() -> str:
return "systems"
def factory_produce(
self, api: "CobblerAPI", seed_data: Dict[str, Any]
) -> system.System:
"""
Return a System forged from seed_data
:param api: Parameter is skipped.
:param seed_data: Data to seed the object with.
:returns: The created object.
"""
return system.System(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
) -> None:
"""
Remove element named 'name' from the collection
:raises CX: In case the name of the object was not given.
"""
obj = self.listing.get(name, None)
if obj is None:
raise CX(f"cannot delete an object that does not exist: {name}")
if isinstance(obj, list):
# Will never happen, but we want to make mypy happy.
raise CX("Ambiguous match detected!")
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/system/pre/*", []
)
if with_sync:
lite_sync = self.api.get_sync()
lite_sync.remove_single_system(obj)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/system/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
def add_to_indexes(self, ref: system.System) -> None:
"""
Add indexes for the system.
:param ref: The reference to the system whose indexes are updated.
"""
super().add_to_indexes(ref)
if not ref.inmemory:
return
for indx_key, indx_val in self.indexes.items():
if indx_key == "uid" or self.disabled_indexes[indx_key]:
continue
for interface in ref.interfaces.values():
if hasattr(interface, indx_key):
secondary_key = getattr(interface, indx_key)
if secondary_key is not None and secondary_key != "":
indx_val[secondary_key] = ref.name
def update_interface_index_value(
self,
interface: network_interface.NetworkInterface,
attribute_name: str,
old_value: str,
new_value: str,
) -> None:
if (
interface.system_name in self.listing
and not self.disabled_indexes[attribute_name]
and interface in self.listing[interface.system_name].interfaces.values()
):
indx_dict = self.indexes[attribute_name]
with self.lock:
if (
old_value != ""
and old_value in indx_dict
and indx_dict[old_value] == interface.system_name
):
del indx_dict[old_value]
if new_value != "":
indx_dict[new_value] = interface.system_name
def update_interfaces_indexes(
self,
ref: system.System,
new_ifaces: Dict[str, network_interface.NetworkInterface],
) -> None:
"""
Update interfaces indexes for the system.
:param ref: The reference to the system whose interfaces indexes are updated.
:param new_ifaces: The new interfaces.
"""
if ref.name not in self.listing:
return
for indx_key, indx_val in self.indexes.items():
if indx_key == "uid" or self.disabled_indexes[indx_key]:
continue
old_ifaces = ref.interfaces
old_values: Set[str] = {
getattr(x, indx_key)
for x in old_ifaces.values()
if hasattr(x, indx_key) and getattr(x, indx_key) != ""
}
new_values: Set[str] = {
getattr(x, indx_key)
for x in new_ifaces.values()
if hasattr(x, indx_key) and getattr(x, indx_key) != ""
}
with self.lock:
for value in old_values - new_values:
del indx_val[value]
for value in new_values - old_values:
indx_val[value] = ref.name
def update_interface_indexes(
self,
ref: system.System,
iface_name: str,
new_iface: network_interface.NetworkInterface,
) -> None:
"""
Update interface indexes for the system.
:param ref: The reference to the system whose interfaces indexes are updated.
:param iface_name: The new interface name.
:param new_iface: The new interface.
"""
self.update_interfaces_indexes(
ref, {**ref.interfaces, **{iface_name: new_iface}}
)
def remove_from_indexes(self, ref: system.System) -> None:
"""
Remove index keys for the system.
:param ref: The reference to the system whose index keys are removed.
"""
if not ref.inmemory:
return
super().remove_from_indexes(ref)
for indx_key, indx_val in self.indexes.items():
if indx_key == "uid" or self.disabled_indexes[indx_key]:
continue
for interface in ref.interfaces.values():
if hasattr(interface, indx_key):
indx_val.pop(getattr(interface, indx_key), None)
def remove_interface_from_indexes(self, ref: system.System, name: str) -> None:
"""
Remove index keys for the system interface.
:param ref: The reference to the system whose index keys are removed.
        :param name: The name of the interface whose index keys are removed.
"""
if not ref.inmemory or name not in ref.interfaces:
return
interface = ref.interfaces[name]
with self.lock:
for indx_key, indx_val in self.indexes.items():
if indx_key == "uid" or self.disabled_indexes[indx_key]:
continue
if hasattr(interface, indx_key):
indx_val.pop(getattr(interface, indx_key), None)
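`update_interface_index_value()` above only drops the old index entry when it still points at the system being updated, which protects another system's entry from being clobbered when duplicates are allowed. A minimal standalone sketch of that guard (names are illustrative, not Cobbler's API):

```python
from typing import Dict


def update_index_value(
    index: Dict[str, str], system_name: str, old_value: str, new_value: str
) -> None:
    """Move a secondary-index entry from old_value to new_value.

    The old entry is only removed when it still maps to this system,
    mirroring the guard in Systems.update_interface_index_value().
    """
    if old_value and index.get(old_value) == system_name:
        del index[old_value]
    if new_value:
        index[new_value] = system_name


dns_index = {"web01.example.org": "web01"}
# Rename web01's DNS entry; the stale key is removed, the new one added.
update_index_value(dns_index, "web01", "web01.example.org", "www.example.org")
```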
==> cobbler_cobbler/cobbler/cobbler_collections/distros.py <==
"""
Cobbler module that at runtime holds all distros in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import glob
import os.path
from typing import TYPE_CHECKING, Any, Dict
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import distro
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Distros(collection.Collection[distro.Distro]):
"""
A distro represents a network bootable matched set of kernels and initrd files.
"""
@staticmethod
def collection_type() -> str:
return "distro"
@staticmethod
def collection_types() -> str:
return "distros"
def factory_produce(
self, api: "CobblerAPI", seed_data: Dict[str, Any]
) -> "distro.Distro":
"""
Return a Distro forged from seed_data
:param api: Parameter is skipped.
:param seed_data: Data to seed the object with.
:returns: The created object.
"""
return distro.Distro(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
) -> None:
"""
Remove element named 'name' from the collection
        :raises CX: In case any subitem (profiles or systems) would be orphaned. If the option ``recursive`` is set,
            the orphaned items are removed automatically.
"""
obj = self.listing.get(name, None)
if obj is None:
raise CX(f"cannot delete an object that does not exist: {name}")
        # first see if any profiles use this distro
if not recursive:
for profile in self.api.profiles():
if profile.distro and profile.distro.name == name: # type: ignore
raise CX(f"removal would orphan profile: {profile.name}")
if recursive:
kids = self.api.find_profile(return_list=True, distro=obj.name)
if kids is None:
kids = []
if not isinstance(kids, list):
raise ValueError("find_items is expected to return a list or None!")
for k in kids:
self.api.remove_profile(
k,
recursive=recursive,
delete=with_delete,
with_triggers=with_triggers,
)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/distro/pre/*", []
)
if with_sync:
lite_sync = self.api.get_sync()
lite_sync.remove_single_distro(obj)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/distro/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
# look through all mirrored directories and find if any directory is holding this particular distribution's
# kernel and initrd
settings = self.api.settings()
possible_storage = glob.glob(settings.webdir + "/distro_mirror/*")
path = None
kernel = obj.kernel
for storage in possible_storage:
if os.path.dirname(kernel).find(storage) != -1:
path = storage
continue
# if we found a mirrored path above, we can delete the mirrored storage /if/ no other object is using the
# same mirrored storage.
if (
with_delete
and path is not None
and os.path.exists(path)
and kernel.find(settings.webdir) != -1
):
# this distro was originally imported so we know we can clean up the associated storage as long as
# nothing else is also using this storage.
found = False
distros = self.api.distros()
for dist in distros:
if dist.kernel.find(path) != -1:
found = True
if not found:
filesystem_helpers.rmtree(path)
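The storage-cleanup tail of `Distros.remove()` only deletes a mirrored directory when no remaining distro's kernel still lives under it (the `found` loop). That check is a plain substring test, sketched standalone below (names are illustrative):

```python
from typing import List


def mirror_is_shared(kernel_paths: List[str], mirror_path: str) -> bool:
    """Return True if any remaining distro's kernel lives under mirror_path.

    Mirrors the "found" loop in Distros.remove(): the mirrored storage may
    only be deleted when no other distro references it.
    """
    return any(mirror_path in kernel for kernel in kernel_paths)


remaining = ["/srv/www/distro_mirror/fedora/isolinux/vmlinuz"]
```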
==> cobbler_cobbler/cobbler/cobbler_collections/manager.py <==
"""
Repository of the Cobbler object model
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import weakref
from typing import TYPE_CHECKING, Any, Dict, cast
from cobbler import serializer, validate
from cobbler.cexceptions import CX
from cobbler.cobbler_collections.collection import Collection
from cobbler.cobbler_collections.distros import Distros
from cobbler.cobbler_collections.images import Images
from cobbler.cobbler_collections.menus import Menus
from cobbler.cobbler_collections.profiles import Profiles
from cobbler.cobbler_collections.repos import Repos
from cobbler.cobbler_collections.systems import Systems
from cobbler.settings import Settings
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM
from cobbler.items.abstract.base_item import BaseItem
class CollectionManager:
"""
Manages a definitive copy of all data cobbler_collections with weakrefs pointing back into the class so they can
understand each other's contents.
"""
has_loaded = False
__shared_state: Dict[str, Any] = {}
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor which loads all content if this action was not performed before.
"""
self.__dict__ = CollectionManager.__shared_state
if not CollectionManager.has_loaded:
self.__load(api)
def __load(self, api: "CobblerAPI") -> None:
"""
Load all collections from the disk into Cobbler.
:param api: The api to resolve information with.
"""
CollectionManager.has_loaded = True
self.api = api
self.__serializer = serializer.Serializer(api)
self._distros = Distros(weakref.proxy(self))
self._repos = Repos(weakref.proxy(self))
self._profiles = Profiles(weakref.proxy(self))
self._systems = Systems(weakref.proxy(self))
self._images = Images(weakref.proxy(self))
self._menus = Menus(weakref.proxy(self))
def distros(self) -> Distros:
"""
Return the definitive copy of the Distros collection
"""
return self._distros
def profiles(self) -> Profiles:
"""
Return the definitive copy of the Profiles collection
"""
return self._profiles
def systems(self) -> Systems:
"""
Return the definitive copy of the Systems collection
"""
return self._systems
def settings(self) -> "Settings":
"""
Return the definitive copy of the application settings
"""
return self.api.settings()
def repos(self) -> Repos:
"""
Return the definitive copy of the Repos collection
"""
return self._repos
def images(self) -> Images:
"""
Return the definitive copy of the Images collection
"""
return self._images
def menus(self) -> Menus:
"""
Return the definitive copy of the Menus collection
"""
return self._menus
def serialize(self) -> None:
"""
Save all cobbler_collections to disk
"""
self.__serializer.serialize(self._distros)
self.__serializer.serialize(self._repos)
self.__serializer.serialize(self._profiles)
self.__serializer.serialize(self._images)
self.__serializer.serialize(self._systems)
self.__serializer.serialize(self._menus)
def serialize_one_item(self, item: "BaseItem") -> None:
"""
Save a collection item to disk
:param item: collection item
"""
collection = self.get_items(item.COLLECTION_TYPE)
self.__serializer.serialize_item(collection, item)
def serialize_item(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Save a collection item to disk
Deprecated - Use above serialize_one_item function instead
collection param can be retrieved
:param collection: Collection
:param item: collection item
"""
self.__serializer.serialize_item(collection, item)
def serialize_delete_one_item(self, item: "BaseItem") -> None:
"""
Save a collection item to disk
:param item: collection item
"""
collection = self.get_items(item.COLLECTION_TYPE)
self.__serializer.serialize_delete(collection, item)
def serialize_delete(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Delete a collection item from disk
:param collection: collection
:param item: collection item
"""
self.__serializer.serialize_delete(collection, item)
def deserialize(self) -> None:
"""
Load all cobbler_collections from disk
:raises CX: if there is an error in deserialization
"""
for args in (
(self._menus, True),
(self._distros, False),
(self._repos, False),
(self._profiles, True),
(self._images, False),
(self._systems, False),
):
try:
cast_collection = cast(Collection["BaseItem"], args[0])
self.__serializer.deserialize(
collection=cast_collection, topological=args[1]
)
except Exception as error:
raise CX(
f"serializer: error loading collection {args[0].collection_type()}: {error}."
f"Check your settings!"
) from error
def deserialize_one_item(self, obj: "BaseItem") -> Dict[str, Any]:
"""
Load a collection item from disk
:param obj: collection item
"""
collection_type = self.get_items(obj.COLLECTION_TYPE).collection_types()
return self.__serializer.deserialize_item(collection_type, obj.name)
def get_items(self, collection_type: str) -> "Collection[BaseItem]":
"""
Get a full collection of a single type.
        Valid values for ``collection_type`` are: "distro", "profile", "system", "repo", "image" and "menu".
:param collection_type: The type of collection to return.
:return: The collection if ``collection_type`` is valid.
:raises CX: If the ``collection_type`` is invalid.
"""
if validate.validate_obj_type(collection_type) and hasattr(
self, f"_{collection_type}s"
):
result = getattr(self, f"_{collection_type}s")
else:
raise CX(
f'internal error, collection name "{collection_type}" not supported'
)
return result
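`CollectionManager` uses the Borg pattern: every instance rebinds its `__dict__` to one shared class-level dict, and a `has_loaded` flag ensures the expensive `__load()` runs only once. A minimal standalone sketch of that pattern (the class and attributes are illustrative, not Cobbler's):

```python
from typing import Any, Dict


class Borg:
    """All instances share one __dict__, so state set on any instance is
    visible on every other one -- the pattern CollectionManager uses."""

    _shared_state: Dict[str, Any] = {}
    has_loaded = False

    def __init__(self) -> None:
        self.__dict__ = Borg._shared_state
        if not Borg.has_loaded:
            # Expensive one-time initialization goes here.
            Borg.has_loaded = True
            self.loaded_count = getattr(self, "loaded_count", 0) + 1


first = Borg()
second = Borg()  # shares state with first; __load equivalent is skipped
```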
==> cobbler_cobbler/cobbler/cobbler_collections/__init__.py <==
"""
The collections have the responsibility of ensuring the relational validity of the data present in Cobbler. Further they
hold the data at runtime.
"""
==> cobbler_cobbler/cobbler/cobbler_collections/profiles.py <==
"""
Cobbler module that at runtime holds all profiles in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING, Any, Dict
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import profile
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Profiles(collection.Collection[profile.Profile]):
"""
A profile represents a distro paired with an automatic OS installation template file.
"""
@staticmethod
def collection_type() -> str:
return "profile"
@staticmethod
def collection_types() -> str:
return "profiles"
def factory_produce(self, api: "CobblerAPI", seed_data: Dict[Any, Any]):
"""
        Return a Profile forged from seed_data
"""
return profile.Profile(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
):
"""
Remove element named 'name' from the collection
:raises CX: In case the name of the object was not given or any other descendant would be orphaned.
"""
if not recursive:
for system in self.api.systems():
if system.profile == name:
raise CX(f"removal would orphan system: {system.name}")
obj = self.listing.get(name, None)
if obj is None:
raise CX(f"cannot delete an object that does not exist: {name}")
if isinstance(obj, list):
# Will never happen, but we want to make mypy happy.
raise CX("Ambiguous match detected!")
if recursive:
kids = obj.descendants
kids.sort(key=lambda x: -x.depth)
for k in kids:
self.api.remove_item(
k.COLLECTION_TYPE,
k,
recursive=False,
delete=with_delete,
with_triggers=with_triggers,
)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/profile/pre/*", []
)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/profile/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
if with_sync:
lite_sync = self.api.get_sync()
lite_sync.remove_single_profile(obj)
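`Profiles.remove()` sorts the descendants by `-x.depth` before deleting them, so the deepest objects (systems, sub-profiles) are always removed before their parents. A standalone sketch of that ordering (the `Node` type is illustrative):

```python
from typing import List, NamedTuple


class Node(NamedTuple):
    name: str
    depth: int


def deletion_order(descendants: List[Node]) -> List[str]:
    """Sort descendants deepest-first, as Profiles.remove() does, so that
    children are always deleted before their parents."""
    return [node.name for node in sorted(descendants, key=lambda x: -x.depth)]


tree = [Node("profile", 1), Node("system", 2), Node("subprofile", 2)]
order = deletion_order(tree)  # the shallow "profile" is deleted last
```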
==> cobbler_cobbler/cobbler/cobbler_collections/menus.py <==
"""
Cobbler module that at runtime holds all menus in Cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2021 Yuriy Chelpanov <yuriy.chelpanov@gmail.com>
from typing import TYPE_CHECKING, Any, Dict
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.cobbler_collections import collection
from cobbler.items import menu
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Menus(collection.Collection[menu.Menu]):
"""
A menu represents an element of the hierarchical boot menu.
"""
@staticmethod
def collection_type() -> str:
return "menu"
@staticmethod
def collection_types() -> str:
return "menus"
def factory_produce(
self, api: "CobblerAPI", seed_data: Dict[str, Any]
) -> menu.Menu:
"""
Return a Menu forged from seed_data
:param api: Parameter is skipped.
:param seed_data: Data to seed the object with.
:returns: The created object.
"""
return menu.Menu(self.api, **seed_data)
def remove(
self,
name: str,
with_delete: bool = True,
with_sync: bool = True,
with_triggers: bool = True,
recursive: bool = False,
) -> None:
"""
Remove element named 'name' from the collection
:param name: The name of the menu
:param with_delete: In case the deletion triggers are executed for this menu.
:param with_sync: In case a Cobbler Sync should be executed after the action.
:param with_triggers: In case the Cobbler Trigger mechanism should be executed.
:param recursive: In case you want to delete all objects this menu references.
        :raises CX: Raised in case you want to delete a non-existing menu.
"""
obj = self.find(name=name)
if obj is None or not isinstance(obj, menu.Menu):
raise CX(f"cannot delete an object that does not exist: {name}")
for item_type in ["image", "profile"]:
items = self.api.find_items(item_type, {"menu": obj.name}, return_list=True)
if items is None:
continue
if not isinstance(items, list):
raise ValueError("Expected list or None from find_items!")
for item in items:
item.menu = ""
if recursive:
kids = obj.descendants
kids.sort(key=lambda x: -x.depth)
for k in kids:
self.api.remove_item(
k.COLLECTION_TYPE,
k,
recursive=False,
delete=with_delete,
with_triggers=with_triggers,
)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/menu/pre/*", []
)
with self.lock:
self.remove_from_indexes(obj)
del self.listing[name]
self.collection_mgr.serialize_delete(self, obj)
if with_delete:
if with_triggers:
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/delete/menu/post/*", []
)
utils.run_triggers(
self.api, obj, "/var/lib/cobbler/triggers/change/*", []
)
if with_sync:
lite_sync = self.api.get_sync()
lite_sync.remove_single_menu()
==> cobbler_cobbler/cobbler/utils/input_converters.py <==
"""
Helper functions that convert raw user input (strings, lists, dicts, booleans, ints) into canonical Python types.
"""
import shlex
from typing import Any, Dict, List, Optional, Union
from cobbler import enums
def input_string_or_list_no_inherit(
options: Optional[Union[str, List[Any]]]
) -> List[Any]:
"""
Accepts a delimited list of stuff or a list, but always returns a list.
:param options: The object to split into a list.
    :return: If ``options`` is ``"delete"``, ``None`` (the object, not the literal) or an empty str, then an empty
             list is returned. Otherwise, this function returns the list as-is or splits the str into a list.
:raises TypeError: In case the type of ``options`` was neither ``None``, str or list.
"""
if not options or options == "delete":
return []
if isinstance(options, list):
return options
if isinstance(options, str): # type: ignore
tokens = shlex.split(options)
return tokens
raise TypeError("invalid input type")
def input_string_or_list(
options: Optional[Union[str, List[Any]]]
) -> Union[List[Any], str]:
"""
Accepts a delimited list of stuff or a list, but always returns a list.
:param options: The object to split into a list.
    :return: The str ``<<inherit>>`` when this function gets passed ``<<inherit>>``. If ``options`` is ``"delete"``,
             an empty list is returned. Otherwise, this function returns the list as-is or splits the str into a list.
:raises TypeError: In case the type of ``options`` was neither ``None``, str or list.
"""
if options == enums.VALUE_INHERITED:
return enums.VALUE_INHERITED
return input_string_or_list_no_inherit(options)
def input_string_or_dict(
options: Union[str, List[Any], Dict[Any, Any]], allow_multiples: bool = True
) -> Union[str, Dict[Any, Any]]:
"""
    Older Cobbler files stored configurations in a flat way, such that all values were strings. Newer versions of
    Cobbler allow dictionaries. This function is used to allow loading of older value formats so new users of Cobbler
    aren't broken in an upgrade.
:param options: The str or dict to convert.
:param allow_multiples: True (default) to allow multiple identical keys, otherwise set this false explicitly.
:return: A dict or the value ``<<inherit>>`` in case it is the only content of ``options``.
:raises TypeError: Raised in case the input type is wrong.
"""
if options == enums.VALUE_INHERITED:
return enums.VALUE_INHERITED
return input_string_or_dict_no_inherit(options, allow_multiples)
def input_string_or_dict_no_inherit(
options: Union[str, List[Any], Dict[Any, Any]], allow_multiples: bool = True
) -> Dict[Any, Any]:
"""
See :meth:`~cobbler.utils.input_converters.input_string_or_dict`
"""
if options is None or options == "delete": # type: ignore[reportUnnecessaryComparison]
return {}
if isinstance(options, list):
raise TypeError(f"No idea what to do with list: {options}")
if isinstance(options, str):
new_dict: Dict[str, Any] = {}
tokens = shlex.split(options)
for token in tokens:
tokens2 = token.split("=", 1)
if len(tokens2) == 1:
# this is a singleton option, no value
key = tokens2[0]
value = None
else:
key = tokens2[0]
value = tokens2[1]
# If we're allowing multiple values for the same key, check to see if this token has already been inserted
# into the dictionary of values already.
if key in new_dict and allow_multiples:
# If so, check to see if there is already a list of values otherwise convert the dictionary value to an
# array, and add the new value to the end of the list.
if isinstance(new_dict[key], list):
new_dict[key].append(value)
else:
new_dict[key] = [new_dict[key], value]
else:
new_dict[key] = value
# make sure we have no empty entries
new_dict.pop("", None)
return new_dict
if isinstance(options, dict): # type: ignore
options.pop("", None)
return options
raise TypeError("invalid input type")
def input_boolean(value: Union[str, bool, int]) -> bool:
"""
Convert a str to a boolean. If this is not possible or the value is false return false.
:param value: The value to convert to boolean.
:return: True if the value is in the following list, otherwise false: "true", "1", "on", "yes", "y" .
"""
if not isinstance(value, (str, bool, int)): # type: ignore[reportUnnecessaryIsInstance]
raise TypeError(
"The value handed to the input_boolean function was not convertable due to a wrong type "
f"(found: {type(value)})!"
)
value = str(value).lower()
return value in ["true", "1", "on", "yes", "y"]
def input_int(value: Union[str, int, float]) -> int:
"""
Convert a value to integer.
:param value: The value to convert.
:raises TypeError: In case after the attempted conversion we still don't have an int.
:return: The integer if the conversion was successful.
"""
error_message = "value must be convertable to type int."
if isinstance(value, (str, float)):
try:
converted_value = int(value)
except ValueError as value_error:
raise TypeError(error_message) from value_error
if not isinstance(converted_value, int): # type: ignore
raise TypeError(error_message)
return converted_value
if isinstance(value, bool):
return int(value)
if isinstance(value, int): # type: ignore
return value
raise TypeError(error_message)
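The string-to-dict conversion above splits `shlex` tokens on the first `=` and, when `allow_multiples` is set, collects repeated keys into a list. A compact standalone reimplementation of that parsing behaviour (the function name is illustrative, not Cobbler's API):

```python
import shlex
from typing import Any, Dict


def parse_options(options: str, allow_multiples: bool = True) -> Dict[str, Any]:
    """Parse 'a=1 b=2 a=3' style strings the way input_string_or_dict_no_inherit
    does: singletons map to None, repeated keys become a list when
    allow_multiples is True, and empty keys are dropped."""
    new_dict: Dict[str, Any] = {}
    for token in shlex.split(options):
        key, _, raw_value = token.partition("=")
        value = raw_value if "=" in token else None  # singleton option -> None
        if key in new_dict and allow_multiples:
            if isinstance(new_dict[key], list):
                new_dict[key].append(value)
            else:
                new_dict[key] = [new_dict[key], value]
        else:
            new_dict[key] = value
    new_dict.pop("", None)  # make sure we have no empty entries
    return new_dict


parsed = parse_options("console=ttyS0 quiet console=tty0")
```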
==> cobbler_cobbler/cobbler/utils/signatures.py <==
"""
Helper functions for loading and querying the distro import signatures.
"""
import json
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from cobbler import utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.image import Image
signature_cache: Dict[str, Any] = {}
def get_supported_distro_boot_loaders(
item: Union["Distro", "Image"], api_handle: Optional["CobblerAPI"] = None
) -> List[str]:
"""
    Return the subset of bootloaders that are available for the distro or image in the argument. If this cannot be
    determined, fall back to well-known per-architecture defaults and, as a last resort, to the globally known list
    of bootloaders.
    :param item: The distro or image to check for.
    :param api_handle: The api instance to resolve metadata and settings from.
    :return: The list of supported bootloaders.
"""
try:
# Try to read from the signature
if api_handle is not None:
return api_handle.get_signatures()["breeds"][item.breed][item.os_version][
"boot_loaders"
][item.arch.value]
raise Exception("Fall through to cache for signatures!")
except Exception:
try:
# Try to read directly from the cache
return signature_cache["breeds"][item.breed][item.os_version][
"boot_loaders"
][item.arch.value]
except Exception:
try:
well_known_defaults = {
"ppc": ["grub", "pxe"],
"ppc64": ["grub", "pxe"],
"ppc64le": ["grub", "pxe"],
"ppc64el": ["grub", "pxe"],
"aarch64": ["grub"],
"i386": ["grub", "pxe", "ipxe"],
"x86_64": ["grub", "pxe", "ipxe"],
}
# Else use some well-known defaults
return well_known_defaults[item.arch.value]
except Exception:
# Else return the globally known list
return utils.get_supported_system_boot_loaders()
def load_signatures(filename: str, cache: bool = True) -> None:
"""
Loads the import signatures for distros.
:param filename: Loads the file with the given name.
:param cache: If the cache should be set with the newly read data.
"""
# Signature cache is module wide and thus requires global
global signature_cache # pylint: disable=global-statement,invalid-name
with open(filename, "r", encoding="UTF-8") as signature_file_fd:
sigjson = signature_file_fd.read()
sigdata = json.loads(sigjson)
if cache:
signature_cache = sigdata
def get_valid_breeds() -> List[str]:
"""
Return a list of valid breeds found in the import signatures
"""
if "breeds" in signature_cache:
return list(signature_cache["breeds"].keys())
return []
def get_valid_os_versions_for_breed(breed: str) -> List[str]:
"""
Return a list of valid os-versions for the given breed
:param breed: The operating system breed to check for.
:return: All operating system version which are known to Cobbler according to the signature cache filtered by a
os-breed.
"""
os_versions = []
if breed in get_valid_breeds():
os_versions = list(signature_cache["breeds"][breed].keys())
return os_versions
def get_valid_os_versions() -> List[str]:
"""
Return a list of valid os-versions found in the import signatures
:return: All operating system versions which are known to Cobbler according to the signature cache.
"""
os_versions: List[str] = []
try:
for breed in get_valid_breeds():
os_versions.extend(list(signature_cache["breeds"][breed].keys()))
except Exception:
pass
return utils.uniquify(os_versions)
def get_valid_archs() -> List[str]:
"""
Return a list of valid architectures found in the import signatures
:return: All architectures which are known to Cobbler according to the signature cache.
"""
archs: List[str] = []
try:
for breed in get_valid_breeds():
for operating_system in list(signature_cache["breeds"][breed].keys()):
archs += signature_cache["breeds"][breed][operating_system][
"supported_arches"
]
except Exception:
pass
return utils.uniquify(archs)
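`get_supported_distro_boot_loaders()` above is a fallback chain: signatures, then a per-architecture default table, then the global bootloader list. A standalone sketch of that chain using a single `KeyError` handler instead of nested `try`/`except` blocks (names and the signature layout follow the code above, but the function itself is illustrative):

```python
from typing import Any, Dict, List

# Per-architecture defaults, as in the code above (abbreviated).
WELL_KNOWN_DEFAULTS: Dict[str, List[str]] = {
    "aarch64": ["grub"],
    "x86_64": ["grub", "pxe", "ipxe"],
}
GLOBAL_LOADERS: List[str] = ["grub", "pxe", "ipxe"]


def boot_loaders_for(
    signatures: Dict[str, Any], breed: str, os_version: str, arch: str
) -> List[str]:
    """Resolve boot loaders with the same fallback chain as
    get_supported_distro_boot_loaders(): signatures -> arch defaults -> global."""
    try:
        return signatures["breeds"][breed][os_version]["boot_loaders"][arch]
    except KeyError:
        return WELL_KNOWN_DEFAULTS.get(arch, GLOBAL_LOADERS)


sigs = {"breeds": {"redhat": {"rhel9": {"boot_loaders": {"x86_64": ["grub"]}}}}}
```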
==> cobbler_cobbler/cobbler/utils/event.py <==
"""
This module contains logic to support the events Cobbler generates in its XML-RPC API.
"""
import time
import uuid
from typing import List, Union
from cobbler import enums
class CobblerEvent:
"""
This is a small helper class that represents an event in Cobbler.
"""
def __init__(self, name: str = "", statetime: float = 0.0) -> None:
"""
Default Constructor that initializes the event id.
:param name: The human-readable name of the event
        :param statetime: The time the event was created.
"""
self.__event_id = ""
self.statetime = statetime
self.__name = name
self.state = enums.EventStatus.INFO
self.read_by_who: List[str] = []
        # Initialize the event_id
self.__generate_event_id()
def __len__(self) -> int:
return len(self.__members())
def __getitem__(self, idx: int) -> Union[str, List[str], float]:
return self.__members()[idx]
def __members(self) -> List[Union[str, float, List[str]]]:
"""
Lists the members with their current values.
:returns: This converts all members to scalar types that can be passed via XML-RPC.
"""
return [self.statetime, self.name, self.state.value, self.read_by_who]
@property
def event_id(self) -> str:
"""
Read only property to retrieve the internal ID of the event.
"""
return self.__event_id
@property
def name(self) -> str:
"""
Read only property to retrieve the human-readable name of the event.
"""
return self.__name
def __generate_event_id(self) -> None:
"""
        Generate an event id based on the current timestamp and store it in ``self.__event_id``.
        The resulting format is: "<4 digit year>-<2 digit month>-<2 digit day>_<2 digit hour><2 digit minute>
        <2 digit second>_<name>_<uuid4 hex>"
"""
(
year,
month,
day,
hour,
minute,
second,
_,
_,
_,
) = time.localtime()
task_uuid = uuid.uuid4().hex
self.__event_id = f"{year:04d}-{month:02d}-{day:02d}_{hour:02d}{minute:02d}{second:02d}_{self.name}_{task_uuid}"
==> cobbler_cobbler/cobbler/utils/thread.py <==
"""
This module is responsible for managing the custom common threading logic Cobbler has.
"""
import logging
import pathlib
from threading import Thread
from typing import TYPE_CHECKING, Any, Callable, Dict, Optional
from cobbler import enums, utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.remote import CobblerXMLRPCInterface
class CobblerThread(Thread):
"""
This is a custom thread that has a custom logger as well as logic to execute Cobbler triggers.
"""
def __init__(
self,
event_id: str,
remote: "CobblerXMLRPCInterface",
options: Dict[str, Any],
task_name: str,
api: "CobblerAPI",
run: Callable[["CobblerThread"], None],
on_done: Optional[Callable[["CobblerThread"], None]] = None,
):
"""
This constructor creates a Cobbler thread which then may be run by calling ``run()``.
:param event_id: The event-id which is associated with this thread. Also used as thread name
:param remote: The Cobbler remote object to execute actions with.
:param options: Additional options which can be passed into the Thread.
:param task_name: The high level task name which is used to trigger pre- and post-task triggers
:param api: The Cobbler api object to resolve information with.
:param run: The callable that is going to be executed with this thread.
:param on_done: An optional callable that is going to be executed after ``run`` but before the triggers.
"""
super().__init__(name=event_id)
self.event_id = event_id
self.remote = remote
self.logger = logging.getLogger()
self.__task_log_handler: Optional[logging.FileHandler] = None
self.__setup_logger()
self._run = run
self.on_done = on_done
self.options = options
self.task_name = task_name
self.api = api
def __setup_logger(self):
"""
Utility function that will set up the Python logger for the tasks in a special directory.
"""
filename = pathlib.Path("/var/log/cobbler/tasks") / f"{self.event_id}.log"
self.__task_log_handler = logging.FileHandler(str(filename), encoding="utf-8")
task_log_formatter = logging.Formatter(
"[%(threadName)s] %(asctime)s - %(levelname)s | %(message)s"
)
self.__task_log_handler.setFormatter(task_log_formatter)
self.logger.setLevel(logging.INFO)
self.logger.addHandler(self.__task_log_handler)
def _set_task_state(self, new_state: enums.EventStatus):
"""
Set the state of the task. (For internal use only)
:param new_state: The new state of the task.
"""
if not isinstance(new_state, enums.EventStatus): # type: ignore
raise TypeError('"new_state" needs to be of type enums.EventStatus!')
if self.event_id not in self.remote.events:
raise ValueError('"event_id" not existing!')
self.remote.events[self.event_id].state = new_state
# clear the list of who has read it
self.remote.events[self.event_id].read_by_who = []
if new_state == enums.EventStatus.COMPLETE:
self.logger.info("### TASK COMPLETE ###")
elif new_state == enums.EventStatus.FAILED:
self.logger.error("### TASK FAILED ###")
def run(self) -> None:
"""
Run the thread.
        :return: The return code of the action. This may be a boolean or a Linux return code.
"""
self.logger.info("start_task(%s); event_id(%s)", self.task_name, self.event_id)
lock = "load_items_lock" in self.options and self.task_name != "load_items"
if lock:
# Shared lock to suspend execution of _background_load_items
self.options["load_items_lock"].acquire(blocking=False)
try:
if utils.run_triggers(
api=self.api,
globber=f"/var/lib/cobbler/triggers/task/{self.task_name}/pre/*",
additional=self.options if isinstance(self.options, list) else [],
):
self._set_task_state(enums.EventStatus.FAILED)
return
return_code = self._run(self)
if return_code is not None and not return_code:
self._set_task_state(enums.EventStatus.FAILED)
else:
self._set_task_state(enums.EventStatus.COMPLETE)
if self.on_done is not None:
self.on_done(self)
utils.run_triggers(
api=self.api,
globber=f"/var/lib/cobbler/triggers/task/{self.task_name}/post/*",
additional=self.options if isinstance(self.options, list) else [],
)
return return_code
except Exception:
utils.log_exc()
self._set_task_state(enums.EventStatus.FAILED)
return
finally:
if lock:
self.options["load_items_lock"].release()
if self.__task_log_handler is not None:
self.logger.removeHandler(self.__task_log_handler)
# ===== File: cobbler_cobbler/cobbler/utils/__init__.py (cobbler/cobbler, GPL-2.0) =====
"""
Misc heavy lifting functions for Cobbler
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import contextlib
import fcntl
import glob
import logging
import os
import random
import re
import shutil
import subprocess
import sys
import traceback
import urllib.error
import urllib.parse
import urllib.request
from functools import reduce
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
Dict,
Hashable,
List,
Optional,
Pattern,
Tuple,
Union,
)
import distro
from netaddr.ip import IPAddress, IPNetwork
from cobbler import enums, settings
from cobbler.cexceptions import CX
from cobbler.utils import process_management
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM
from cobbler.items.abstract.bootable_item import BootableItem
from cobbler.settings import Settings
CHEETAH_ERROR_DISCLAIMER = """
# *** ERROR ***
#
# There is a templating error preventing this file from rendering correctly.
#
# This is most likely not due to a bug in Cobbler and is something you can fix.
#
# Look at the message below to see what things are causing problems.
#
# (1) Does the template file reference a $variable that is not defined?
# (2) is there a formatting error in a Cheetah directive?
# (3) Should dollar signs ($) be escaped that are not being escaped?
#
# Try fixing the problem and then investigate to see if this message goes
# away or changes.
#
"""
re_kernel = re.compile(
r"(vmlinu[xz]|(kernel|linux(\.img)?)|pxeboot\.n12|wimboot|mboot\.c32|tboot\.b00|b\.b00|.+\.kernel)"
)
re_initrd = re.compile(r"(initrd(.*)\.img|ramdisk\.image\.gz|boot\.sdi|imgpayld\.tgz)")
# all logging from utils.die goes to the main log even if there is another log.
# logging.getLogger is not annotated fully according to pylance.
logger = logging.getLogger() # type: ignore
def die(msg: str) -> None:
"""
    This method lets Cobbler crash with an exception. Log the exception once in the per-task log or the main log if
this is not a background op.
:param msg: The message to send for raising the exception
:raises CX: Raised in all cases with ``msg``.
"""
# log the exception once in the per-task log or the main log if this is not a background op.
try:
raise CX(msg)
except CX:
log_exc()
# now re-raise it so the error can fail the operation
raise CX(msg)
def log_exc() -> None:
"""
Log an exception.
"""
(exception_type, exception_value, exception_traceback) = sys.exc_info()
logger.info("Exception occurred: %s", exception_type)
logger.info("Exception value: %s", exception_value)
logger.info(
"Exception Info:\n%s",
"\n".join(traceback.format_list(traceback.extract_tb(exception_traceback))),
)
def get_exc(exc: Exception, full: bool = True) -> str:
"""
This tries to analyze if an exception comes from Cobbler and potentially enriches or shortens the exception.
:param exc: The exception which should be analyzed.
:param full: If the full exception should be returned or only the most important information.
:return: The exception which has been converted into a string which then can be logged easily.
"""
(exec_type, exec_value, trace) = sys.exc_info()
buf = ""
try:
getattr(exc, "from_cobbler")
buf = str(exc)[1:-1] + "\n"
except Exception:
if not full:
buf += str(exec_type)
buf = f"{buf}\n{exec_value}"
if full:
buf += "\n" + "\n".join(traceback.format_list(traceback.extract_tb(trace)))
return buf
def cheetah_exc(exc: Exception) -> str:
"""
Converts an exception thrown by Cheetah3 into a custom error message.
:param exc: The exception to convert.
:return: The string representation of the Cheetah3 exception.
"""
lines = get_exc(exc).split("\n")
buf = ""
for line in lines:
buf += f"# {line}\n"
return CHEETAH_ERROR_DISCLAIMER + buf
def pretty_hex(ip_address: IPAddress, length: int = 8) -> str:
"""
Pads an IP object with leading zeroes so that the result is _length_ hex digits. Also do an upper().
:param ip_address: The IP address to pretty print.
:param length: The length of the resulting hexstring. If the number is smaller than the resulting hex-string
then no front-padding is done.
"""
hexval = f"{ip_address.value:x}"
if len(hexval) < length:
hexval = "0" * (length - len(hexval)) + hexval
return hexval.upper()
def get_host_ip(ip_address: str, shorten: bool = True) -> str:
"""
Return the IP encoding needed for the TFTP boot tree.
:param ip_address: The IP address to pretty print.
:param shorten: Whether the IP-Address should be shortened or not.
:return: The IP encoded as a hexadecimal value.
"""
ip_address_obj = IPAddress(ip_address)
cidr = IPNetwork(ip_address)
if len(cidr) == 1: # Just an IP, e.g. a /32
return pretty_hex(ip_address_obj)
pretty = pretty_hex(cidr[0])
if not shorten or len(cidr) <= 8:
# not enough to make the last nibble insignificant
return pretty
cutoff = (32 - cidr.prefixlen) // 4
return pretty[0:-cutoff]
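To make the hex encoding concrete, here is a hypothetical stdlib-only sketch (`host_hex_sketch` is not part of Cobbler; it uses ``ipaddress`` instead of netaddr) that mirrors ``pretty_hex()`` plus the shortening in ``get_host_ip()``:

```python
import ipaddress

def host_hex_sketch(spec: str, shorten: bool = True) -> str:
    """Illustrative re-implementation of pretty_hex()/get_host_ip() on top of the stdlib."""
    net = ipaddress.ip_network(spec, strict=False)
    hexval = f"{int(net.network_address):08x}".upper()
    if net.num_addresses == 1 or not shorten or net.num_addresses <= 8:
        return hexval
    # Drop the trailing nibbles that are insignificant inside the network.
    cutoff = (32 - net.prefixlen) // 4
    return hexval[:-cutoff] if cutoff else hexval

host_hex_sketch("192.168.1.1")     # a plain host keeps all 8 hex digits: "C0A80101"
host_hex_sketch("192.168.1.0/24")  # a /24 drops the last two nibbles: "C0A801"
```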
def _IP(ip_address: Union[str, IPAddress]) -> IPAddress:
"""
Returns a netaddr.IP object representing an ip.
If ip is already an netaddr.IP instance just return it.
Else return a new instance
"""
if isinstance(ip_address, IPAddress):
return ip_address
return IPAddress(ip_address)
def is_ip(strdata: str) -> bool:
"""
Return whether the argument is an IP address.
    :param strdata: The IP in a string format. This gets passed to the netaddr ``IPAddress`` constructor.
"""
try:
_IP(strdata)
except Exception:
return False
return True
def get_random_mac(api_handle: "CobblerAPI", virt_type: str = "kvm") -> str:
"""
Generate a random MAC address.
The code of this method was taken from xend/server/netif.py
:param api_handle: The main Cobbler api instance.
:param virt_type: The virtualization provider. Currently possible is 'vmware', 'xen', 'qemu', 'kvm'.
:returns: MAC address string
:raises CX: Raised in case unsupported ``virt_type`` given.
"""
if virt_type.startswith("vmware"):
mac = [
0x00,
0x50,
0x56,
random.randint(0x00, 0x3F),
random.randint(0x00, 0xFF),
random.randint(0x00, 0xFF),
]
elif (
virt_type.startswith("xen")
or virt_type.startswith("qemu")
or virt_type.startswith("kvm")
):
mac = [
0x00,
0x16,
0x3E,
random.randint(0x00, 0x7F),
random.randint(0x00, 0xFF),
random.randint(0x00, 0xFF),
]
else:
raise CX("virt mac assignment not yet supported")
result = ":".join([f"{x:02x}" for x in mac])
systems = api_handle.systems()
while systems.find(mac_address=result):
        result = get_random_mac(api_handle, virt_type)
return result
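The OUI prefix selection above can be sketched as follows; ``random_mac_sketch`` is an illustrative name and the collision check against existing systems is deliberately left out:

```python
import random

def random_mac_sketch(virt_type: str = "kvm") -> str:
    """Sketch of the MAC generation above, minus the uniqueness check."""
    if virt_type.startswith("vmware"):
        octets = [0x00, 0x50, 0x56, random.randint(0x00, 0x3F)]  # VMware OUI
    else:
        octets = [0x00, 0x16, 0x3E, random.randint(0x00, 0x7F)]  # Xen OUI, shared by qemu/kvm
    octets += [random.randint(0x00, 0xFF), random.randint(0x00, 0xFF)]
    return ":".join(f"{x:02x}" for x in octets)

random_mac_sketch()         # e.g. "00:16:3e:5a:…"
random_mac_sketch("vmware") # e.g. "00:50:56:1f:…"
```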
def find_matching_files(directory: str, regex: Pattern[str]) -> List[str]:
"""
Find all files in a given directory that match a given regex. Can't use glob directly as glob doesn't take regexen.
The search does not include subdirectories.
:param directory: The directory to search in.
:param regex: The regex to apply to the found files.
:return: An array of files which apply to the regex.
"""
files = glob.glob(os.path.join(directory, "*"))
results: List[str] = []
for file in files:
if regex.match(os.path.basename(file)):
results.append(file)
return results
def find_highest_files(directory: str, unversioned: str, regex: Pattern[str]) -> str:
"""
Find the highest numbered file (kernel or initrd numbering scheme) in a given directory that matches a given
pattern. Used for auto-booting the latest kernel in a directory.
:param directory: The directory to search in.
:param unversioned: The base filename which also acts as a last resort if no numbered files are found.
:param regex: The regex to search for.
:return: The file with the highest number or an empty string.
"""
files = find_matching_files(directory, regex)
get_numbers = re.compile(r"(\d+).(\d+).(\d+)")
def max2(first: str, second: str) -> str:
"""
Returns the larger of the two values
"""
first_match = get_numbers.search(os.path.basename(first))
second_match = get_numbers.search(os.path.basename(second))
if not (first_match and second_match):
raise ValueError("Could not detect version numbers correctly!")
first_value = first_match.groups()
second_value = second_match.groups()
if first_value > second_value:
return first
return second
if len(files) > 0:
return reduce(max2, files)
# Couldn't find a highest numbered file, but maybe there is just a 'vmlinuz' or an 'initrd.img' in this directory?
last_chance = os.path.join(directory, unversioned)
if os.path.exists(last_chance):
return last_chance
return ""
def find_kernel(path: str) -> str:
"""
Given a filename, find if the path can be made to resolve into a kernel, and return that full path if possible.
:param path: The path to check for a kernel.
:return: path if at the specified location a possible match for a kernel was found, otherwise an empty string.
"""
if not isinstance(path, str): # type: ignore
raise TypeError("path must be of type str!")
if os.path.isfile(path):
filename = os.path.basename(path)
if re_kernel.match(filename) or filename == "vmlinuz":
return path
elif os.path.isdir(path):
return find_highest_files(path, "vmlinuz", re_kernel)
# For remote URLs we expect an absolute path, and will not do any searching for the latest:
elif file_is_remote(path) and remote_file_exists(path):
return path
return ""
def remove_yum_olddata(path: Union[str, "os.PathLike[str]"]) -> None:
"""
Delete .olddata folders that might be present from a failed run of createrepo.
:param path: The path to check for .olddata files.
"""
directories_to_try = [
".olddata",
".repodata/.olddata",
"repodata/.oldata",
"repodata/repodata",
]
for pathseg in directories_to_try:
olddata = Path(path, pathseg)
if olddata.is_dir() and olddata.exists():
logger.info('Removing: "%s"', olddata)
shutil.rmtree(olddata, ignore_errors=False, onerror=None)
def find_initrd(path: str) -> Optional[str]:
"""
    Given a directory or a filename, see if the path can be made to resolve into an initrd, and return that full path if
possible.
:param path: The path to check for initrd files.
:return: None or the path to the found initrd.
"""
# FUTURE: try to match kernel/initrd pairs?
if path is None: # type: ignore
return None
if os.path.isfile(path):
# filename = os.path.basename(path)
# if re_initrd.match(filename):
# return path
# if filename == "initrd.img" or filename == "initrd":
# return path
return path
if os.path.isdir(path):
return find_highest_files(path, "initrd.img", re_initrd)
# For remote URLs we expect an absolute path, and will not do any searching for the latest:
if file_is_remote(path) and remote_file_exists(path):
return path
return None
def read_file_contents(
file_location: str, fetch_if_remote: bool = False
) -> Optional[str]:
"""
Reads the contents of a file, which could be referenced locally or as a URI.
:param file_location: The location of the file to read.
:param fetch_if_remote: If True a remote file will be tried to read, otherwise remote files are skipped and None is
returned.
:return: Returns None if file is remote and templating of remote files is disabled.
:raises FileNotFoundError: if the file does not exist at the specified location.
"""
# Local files:
if file_location.startswith("/"):
if not os.path.exists(file_location):
logger.warning("File does not exist: %s", file_location)
raise FileNotFoundError(f"File not found: {file_location}")
try:
with open(file_location, encoding="UTF-8") as file_fd:
data = file_fd.read()
return data
        except Exception:
log_exc()
raise
# Remote files:
if not fetch_if_remote:
return None
if file_is_remote(file_location):
try:
with urllib.request.urlopen(file_location) as handler:
                # urlopen returns bytes; decode so the annotated ``str`` return type holds
                data = handler.read().decode("utf-8")
return data
except urllib.error.HTTPError as error:
# File likely doesn't exist
logger.warning("File does not exist: %s", file_location)
raise FileNotFoundError(f"File not found: {file_location}") from error
return None
def remote_file_exists(file_url: str) -> bool:
"""
Return True if the remote file exists.
:param file_url: The URL to check.
:return: True if Cobbler can reach the specified URL, otherwise false.
"""
try:
with urllib.request.urlopen(file_url) as _:
pass
return True
except urllib.error.HTTPError:
# File likely doesn't exist
return False
def file_is_remote(file_location: str) -> bool:
"""
Returns true if the file is remote and referenced via a protocol we support.
:param file_location: The URI to check.
:return: True if the URI is http, https or ftp. Otherwise false.
"""
file_loc_lc = file_location.lower()
# Check for urllib2 supported protocols
for prefix in ["http://", "https://", "ftp://"]:
if file_loc_lc.startswith(prefix):
return True
return False
def blender(
api_handle: "CobblerAPI", remove_dicts: bool, root_obj: "BootableItem"
) -> Dict[str, Any]:
"""
Combine all of the data in an object tree from the perspective of that point on the tree, and produce a merged
dictionary containing consolidated data.
:param api_handle: The api to use for collecting the information to blender the item.
:param remove_dicts: Boolean to decide whether dicts should be converted.
:param root_obj: The object which should act as the root-node object.
:return: A dictionary with all the information from the root node downwards.
"""
tree = root_obj.grab_tree()
tree.reverse() # start with top of tree, override going down
results: Dict[str, Any] = {}
for node in tree:
__consolidate(node, results)
# Make interfaces accessible without Cheetah-voodoo in the templates
# EXAMPLE: $ip == $ip0, $ip1, $ip2 and so on.
if root_obj.COLLECTION_TYPE == "system": # type: ignore
for name, interface in root_obj.interfaces.items(): # type: ignore
intf_dict = interface.to_dict() # type: ignore
for key in intf_dict: # type: ignore
results[f"{key}_{name}"] = intf_dict[key] # type: ignore
# If the root object is a profile or system, add in all repo data for repos that belong to the object chain
if root_obj.COLLECTION_TYPE in ("profile", "system"): # type: ignore
repo_data: List[Dict[Any, Any]] = []
for repo in results.get("repos", []):
repo = api_handle.find_repo(name=repo)
if repo and not isinstance(repo, list):
repo_data.append(repo.to_dict())
# Sorting is courtesy of https://stackoverflow.com/a/73050/4730773
results["repo_data"] = sorted(
repo_data, key=lambda repo_dict: repo_dict["priority"], reverse=True
)
http_port = results.get("http_port", 80)
if http_port in (80, "80"):
results["http_server"] = results["server"]
else:
results["http_server"] = f"{results['server']}:{http_port}"
if "children" in results:
child_names = results["children"]
results["children"] = {}
# logger.info("Children: %s", child_names)
for key in child_names:
# We use return_list=False, thus this is only Optional[ITEM]
child = api_handle.find_items("", name=key, return_list=False) # type: ignore
            if child is None or isinstance(child, list):
                raise ValueError(
                    f'Child with the name "{key}" of parent object "{root_obj.name}" did not exist!'
                )
            results["children"][key] = child.to_dict()
# sanitize output for koan and kernel option lines, etc
if remove_dicts:
# We know we pass a dict, thus we will always get the right type!
results = flatten(results) # type: ignore
# Add in some variables for easier templating as these variables change based on object type.
if "interfaces" in results:
# is a system object
results["system_name"] = results["name"]
results["profile_name"] = results["profile"]
if "distro" in results:
results["distro_name"] = results["distro"]
elif "image" in results:
results["distro_name"] = "N/A"
results["image_name"] = results["image"]
elif "distro" in results:
# is a profile or subprofile object
results["profile_name"] = results["name"]
results["distro_name"] = results["distro"]
elif "kernel" in results:
# is a distro object
results["distro_name"] = results["name"]
elif "file" in results:
# is an image object
results["distro_name"] = "N/A"
results["image_name"] = results["name"]
return results
def flatten(data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
"""
    Convert certain nested dicts to strings. This is only really done for the ones koan needs as strings; it should
    not be done for everything.

    :param data: The dictionary in which various keys should be converted into a string.
    :return: None (if data is not a dict) or the dictionary with the selected values flattened to strings.
"""
if data is None or not isinstance(data, dict): # type: ignore
return None
if "environment" in data:
data["environment"] = dict_to_string(data["environment"])
if "kernel_options" in data:
data["kernel_options"] = dict_to_string(data["kernel_options"])
if "kernel_options_post" in data:
data["kernel_options_post"] = dict_to_string(data["kernel_options_post"])
if "yumopts" in data:
data["yumopts"] = dict_to_string(data["yumopts"])
if "autoinstall_meta" in data:
data["autoinstall_meta"] = dict_to_string(data["autoinstall_meta"])
if "template_files" in data:
data["template_files"] = dict_to_string(data["template_files"])
if "repos" in data and isinstance(data["repos"], list):
data["repos"] = " ".join(data["repos"]) # type: ignore
if "rpm_list" in data and isinstance(data["rpm_list"], list):
data["rpm_list"] = " ".join(data["rpm_list"]) # type: ignore
# Note -- we do not need to flatten "interfaces" as koan does not expect it to be a string, nor do we use it on a
# kernel options line, etc...
return data
def uniquify(seq: List[Any]) -> List[Any]:
"""
Remove duplicates from the sequence handed over in the args.
:param seq: The sequence to check for duplicates.
:return: The list without duplicates.
"""
# Credit: https://www.peterbe.com/plog/uniqifiers-benchmark
# FIXME: if this is actually slower than some other way, overhaul it
# For above there is a better version: https://www.peterbe.com/plog/fastest-way-to-uniquify-a-list-in-python-3.6
seen = {}
result: List[Any] = []
for item in seq:
if item in seen:
continue
seen[item] = 1
result.append(item)
return result
def __consolidate(node: Union["ITEM", "Settings"], results: Dict[Any, Any]) -> Dict[Any, Any]: # type: ignore
"""
Merge data from a given node with the aggregate of all data from past scanned nodes. Dictionaries and arrays are
treated specially.
    :param node: The object to merge data from. The data from the node always wins.
    :param results: The dictionary carrying the aggregated data; it is updated in place.
    :return: A dictionary with the consolidated data.
"""
node_data = node.to_dict()
# If the node has any data items labelled <<inherit>> we need to expunge them. So that they do not override the
# supernodes.
node_data_copy: Dict[Any, Any] = {}
for key in node_data:
value = node_data[key]
if value == enums.VALUE_INHERITED:
if key not in results:
# We need to add at least one value per key, use the property getter to resolve to the
# settings or wherever we inherit from.
node_data_copy[key] = getattr(node, key)
# Old keys should have no inherit and thus are not a real property
if key == "kickstart":
node_data_copy[key] = getattr(type(node), "autoinstall").fget(node)
elif key == "ks_meta":
node_data_copy[key] = getattr(type(node), "autoinstall_meta").fget(node)
else:
if isinstance(value, dict):
node_data_copy[key] = value.copy()
elif isinstance(value, list):
node_data_copy[key] = value[:]
else:
node_data_copy[key] = value
for field, data_item in node_data_copy.items():
if field in results:
# Now merge data types separately depending on whether they are dict, list, or scalar.
fielddata = results[field]
if isinstance(fielddata, dict):
# interweave dict results
results[field].update(data_item.copy())
elif isinstance(fielddata, (list, tuple)):
# add to lists (Cobbler doesn't have many lists)
# FIXME: should probably uniquify list after doing this
results[field].extend(data_item)
results[field] = uniquify(results[field])
else:
# distro field gets special handling, since we don't want to overwrite it ever.
# FIXME: should the parent's field too? It will be overwritten if there are multiple sub-profiles in
# the chain of inheritance
if field != "distro":
results[field] = data_item
else:
results[field] = data_item
# Now if we have any "!foo" results in the list, delete corresponding key entry "foo", and also the entry "!foo",
# allowing for removal of kernel options set in a distro later in a profile, etc.
dict_removals(results, "kernel_options")
dict_removals(results, "kernel_options_post")
dict_removals(results, "autoinstall_meta")
dict_removals(results, "template_files")
return results
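The merge semantics of ``__consolidate()`` can be boiled down to a small illustrative sketch (``consolidate_sketch`` is hypothetical; the ``<<inherit>>`` expansion and the ``distro`` special case are omitted): dicts interweave, lists extend and deduplicate, scalars override.

```python
def consolidate_sketch(results: dict, node_data: dict) -> dict:
    """Simplified merge: dicts interweave, lists extend + uniquify, scalars override."""
    for field, value in node_data.items():
        current = results.get(field)
        if isinstance(current, dict):
            current.update(value)
        elif isinstance(current, (list, tuple)):
            results[field] = list(dict.fromkeys(list(current) + value))
        else:
            results[field] = value
    return results

merged = consolidate_sketch(
    {"kernel_options": {"quiet": None}, "owners": ["admin"]},
    {"kernel_options": {"console": "ttyS0"}, "owners": ["admin", "ops"]},
)
# merged["kernel_options"] == {"quiet": None, "console": "ttyS0"}
# merged["owners"] == ["admin", "ops"]
```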
def dict_removals(results: Dict[Any, Any], subkey: str) -> None:
"""
Remove entries from a dictionary starting with a "!".
:param results: The dictionary to search in
:param subkey: The subkey to search through.
"""
if subkey not in results:
return
return dict_annihilate(results[subkey])
def dict_annihilate(dictionary: Dict[Any, Any]) -> None:
"""
Annihilate entries marked for removal. This method removes all entries with
key names starting with "!". If a ``dictionary`` contains keys "!xxx" and
"xxx", then both will be removed.
:param dictionary: A dictionary to clean up.
"""
for key in list(dictionary.keys()):
if str(key).startswith("!") and key != "!":
short_key = key[1:]
if short_key in dictionary:
del dictionary[short_key]
del dictionary[key]
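The "!key" removal convention is easiest to see in a small example; the function body below is copied from ``dict_annihilate()`` above so the demonstration is self-contained:

```python
def dict_annihilate(dictionary):
    """Copy of dict_annihilate() above, for a self-contained demonstration."""
    for key in list(dictionary.keys()):
        if str(key).startswith("!") and key != "!":
            short_key = key[1:]
            if short_key in dictionary:
                del dictionary[short_key]
            del dictionary[key]

# A profile can set "!console" to strip a kernel option inherited from its distro.
opts = {"console": "ttyS0,115200", "!console": None, "quiet": None}
dict_annihilate(opts)
# opts is now {"quiet": None}
```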
def dict_to_string(_dict: Dict[Any, Any]) -> Union[str, Dict[Any, Any]]:
"""
Convert a dictionary to a printable string. Used primarily in the kernel options string and for some legacy stuff
where koan expects strings (though this last part should be changed to dictionaries)
A KV-Pair is joined with a "=". Values are enclosed in single quotes.
:param _dict: The dictionary to convert to a string.
:return: The string which was previously a dictionary.
"""
buffer = ""
if not isinstance(_dict, dict): # type: ignore
return _dict
for key in _dict:
value = _dict[key]
if not value:
buffer += str(key) + " "
elif isinstance(value, list):
# this value is an array, so we print out every
# key=value
item: Any
for item in value:
# strip possible leading and trailing whitespaces
_item = str(item).strip()
if " " in _item:
buffer += str(key) + "='" + _item + "' "
else:
buffer += str(key) + "=" + _item + " "
else:
_value = str(value).strip()
if " " in _value:
buffer += str(key) + "='" + _value + "' "
else:
buffer += str(key) + "=" + _value + " "
return buffer
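A quick demonstration of the rendering rules above (bare keys for falsy values, one ``key=value`` pair per list element, single quotes only when the value contains a space); the function body is the dict path of ``dict_to_string()`` copied so it runs standalone:

```python
def dict_to_string_sketch(_dict):
    """Copy of dict_to_string() above, trimmed to the dict path, for demonstration."""
    buffer = ""
    for key, value in _dict.items():
        if not value:
            buffer += f"{key} "
        elif isinstance(value, list):
            for item in value:
                _item = str(item).strip()
                buffer += f"{key}='{_item}' " if " " in _item else f"{key}={_item} "
        else:
            _value = str(value).strip()
            buffer += f"{key}='{_value}' " if " " in _value else f"{key}={_value} "
    return buffer

dict_to_string_sketch({"quiet": None, "console": ["tty0", "ttyS0,115200"], "root": "/dev/sda1"})
# -> "quiet console=tty0 console=ttyS0,115200 root=/dev/sda1 "
```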
def rsync_files(src: str, dst: str, args: str, quiet: bool = True) -> bool:
r"""
Sync files from src to dst. The extra arguments specified by args are appended to the command.
:param src: The source for the copy process.
:param dst: The destination for the copy process.
:param args: The extra arguments are appended to our standard arguments.
:param quiet: If ``True`` no progress is reported. If ``False`` then progress will be reported by rsync.
:return: ``True`` on success, otherwise ``False``.
"""
if args is None: # type: ignore
args = ""
# Make sure we put a "/" on the end of the source and destination to make sure we don't cause any rsync weirdness.
if not dst.endswith("/"):
dst = f"{dst}/"
if not src.endswith("/"):
src = f"{src}/"
spacer = ""
if not src.startswith("rsync://") and not src.startswith("/"):
spacer = ' -e "ssh" '
rsync_cmd = [
"rsync",
"-a",
spacer,
f"'{src}'",
dst,
args,
"--exclude-from=/etc/cobbler/rsync.exclude",
"--quiet" if quiet else "--progress",
]
try:
res = subprocess_call(rsync_cmd, shell=False)
if res != 0:
die(f"Failed to run the rsync command: '{rsync_cmd}'")
except Exception:
return False
return True
def run_triggers(
api: "CobblerAPI",
ref: Optional["ITEM"] = None,
globber: str = "",
additional: Optional[List[Any]] = None,
) -> None:
"""Runs all the trigger scripts in a given directory.
Example: ``/var/lib/cobbler/triggers/blah/*``
As of Cobbler 1.5.X, this also runs Cobbler modules that match the globbing paths.
Python triggers are always run before shell triggers.
:param api: The api object to use for resolving the actions.
:param ref: Can be a Cobbler object, if not None, the name will be passed to the script. If ref is None, the script
will be called with no arguments.
    :param globber: A wildcard expression indicating which triggers to run.
:param additional: Additional arguments to run the triggers with.
:raises CX: Raised in case the trigger failed.
"""
logger.debug("running python triggers from %s", globber)
modules = api.get_modules_in_category(globber)
if additional is None:
additional = []
for module in modules:
arglist: List[str] = []
if ref:
arglist.append(ref.name)
for argument in additional:
arglist.append(argument)
logger.debug("running python trigger %s", module.__name__)
return_code = module.run(api, arglist)
if return_code != 0:
raise CX(f"Cobbler trigger failed: {module.__name__}")
# Now do the old shell triggers, which are usually going to be slower, but are easier to write and support any
# language.
logger.debug("running shell triggers from %s", globber)
triggers = glob.glob(globber)
triggers.sort()
for file in triggers:
try:
if file.startswith(".") or file.find(".rpm") != -1:
# skip dotfiles or .rpmnew files that may have been installed in the triggers directory
continue
arglist = [file]
if ref:
arglist.append(ref.name)
for argument in additional:
if argument:
arglist.append(argument)
logger.debug("running shell trigger %s", file)
return_code = subprocess_call(arglist, shell=False) # close_fds=True)
except Exception:
logger.warning("failed to execute trigger: %s", file)
continue
if return_code != 0:
raise CX(
"Cobbler trigger failed: %(file)s returns %(code)d"
% {"file": file, "code": return_code}
)
logger.debug("shell trigger %s finished successfully", file)
logger.debug("shell triggers finished successfully")
def get_family() -> str:
"""
Get family of running operating system.
Family is the base Linux distribution of a Linux distribution, with a set of common parents.
:return: May be "redhat", "debian" or "suse" currently. If none of these are detected then just the distro name is
returned.
"""
# TODO: Refactor that this is purely reliant on the distro module or obsolete it.
redhat_list = (
"red hat",
"redhat",
"scientific linux",
"fedora",
"centos",
"virtuozzo",
"almalinux",
"rocky linux",
"anolis os",
"oracle linux server",
)
distro_name = distro.name().lower()
for item in redhat_list:
if item in distro_name:
return "redhat"
if "debian" in distro_name or "ubuntu" in distro_name:
return "debian"
if "suse" in distro.like():
return "suse"
return distro_name
def os_release() -> Tuple[str, float]:
"""
Get the os version of the linux distro. If the get_family() method succeeds then the result is normalized.
:return: The os-name and os version.
"""
family = get_family()
distro_name = distro.name().lower()
distro_version = distro.version()
make = "unknown"
if family == "redhat":
if "fedora" in distro_name:
make = "fedora"
elif "centos" in distro_name:
make = "centos"
elif "almalinux" in distro_name:
make = "centos"
elif "rocky linux" in distro_name:
make = "centos"
elif "anolis os" in distro_name:
make = "centos"
elif "virtuozzo" in distro_name:
make = "virtuozzo"
elif "oracle linux server" in distro_name:
make = "centos"
else:
make = "redhat"
return make, float(distro_version)
if family == "debian":
if "debian" in distro_name:
return "debian", float(distro_version)
if "ubuntu" in distro_name:
return "ubuntu", float(distro_version)
if family == "suse":
make = "suse"
if "suse" not in distro.like():
make = "unknown"
return make, float(distro_version)
return make, 0.0
def is_selinux_enabled() -> bool:
"""
This check is achieved via a subprocess call to ``selinuxenabled``. Default return is false.
:return: Whether selinux is enabled or not.
"""
if not os.path.exists("/usr/sbin/selinuxenabled"):
return False
selinuxenabled = subprocess_call(["/usr/sbin/selinuxenabled"], shell=False)
if selinuxenabled == 0:
return True
return False
def command_existing(cmd: str) -> bool:
r"""
This takes a command which should be known to the system and checks if it is available.
:param cmd: The executable to check
:return: If the binary does not exist ``False``, otherwise ``True``.
"""
# https://stackoverflow.com/a/28909933
return shutil.which(cmd) is not None
def subprocess_sp(
cmd: Union[str, List[str]], shell: bool = True, process_input: Any = None
) -> Tuple[str, int]:
"""
Call a shell process and redirect the output for internal usage.
:param cmd: The command to execute in a subprocess call.
:param shell: Whether to use a shell or not for the execution of the command.
:param process_input: If there is any input needed for that command to stdin.
:return: A tuple of the output and the return code.
"""
logger.info("running: %s", cmd)
stdin = None
if process_input:
stdin = subprocess.PIPE
try:
with subprocess.Popen(
cmd,
shell=shell,
stdin=stdin,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
close_fds=True,
) as subprocess_popen_obj:
(out, err) = subprocess_popen_obj.communicate(process_input)
return_code = subprocess_popen_obj.returncode
except OSError as os_error:
log_exc()
raise ValueError(
f"OS Error, command not found? While running: {cmd}"
) from os_error
logger.info("received on stdout: %s", out)
logger.debug("received on stderr: %s", err)
return out, return_code
def subprocess_call(
cmd: Union[str, List[str]], shell: bool = False, process_input: Any = None
) -> int:
"""
A simple subprocess call with no output capturing.
:param cmd: The command to execute.
:param shell: Whether to use a shell or not for the execution of the command.
:param process_input: If there is any process_input needed for that command to stdin.
:return: The return code of the process
"""
_, return_code = subprocess_sp(cmd, shell=shell, process_input=process_input)
return return_code
def subprocess_get(
cmd: Union[str, List[str]], shell: bool = True, process_input: Any = None
) -> str:
"""
A simple subprocess call with no return code capturing.
:param cmd: The command to execute.
:param shell: Whether to use a shell or not for the execution of the command.
:param process_input: If there is any process_input needed for that command to stdin.
:return: The data which the subprocess returns.
"""
data, _ = subprocess_sp(cmd, shell=shell, process_input=process_input)
return data
def get_supported_system_boot_loaders() -> List[str]:
"""
Return the list of currently supported bootloaders.
:return: The list of currently supported bootloaders.
"""
return ["grub", "pxe", "ipxe"]
def get_shared_secret() -> Union[str, int]:
"""
The 'web.ss' file is regenerated each time cobblerd restarts and is used to agree on shared secret interchange
between the web server and cobblerd, and also the CLI and cobblerd, when username/password access is not required.
For the CLI, this enables root users to avoid entering username/pass if on the Cobbler server.
:return: The Cobbler secret which enables full access to Cobbler.
"""
try:
        with open("/var/lib/cobbler/web.ss", "r", encoding="utf-8") as web_secret_fd:
            data = web_secret_fd.read()
    except Exception:
        return -1
    return data.strip()
def local_get_cobbler_api_url() -> str:
"""
Get the URL of the Cobbler HTTP API from the Cobbler settings file.
:return: The api entry point. This does not respect modifications from Loadbalancers or API-Gateways.
"""
# Load server and http port
# TODO: Replace with Settings access
data = settings.read_settings_file()
ip_address = data.get("server", "127.0.0.1")
if data.get("client_use_localhost", False):
# this overrides the server setting
ip_address = "127.0.0.1"
port = data.get("http_port", "80")
protocol = "http"
if data.get("client_use_https", False):
protocol = "https"
return f"{protocol}://{ip_address}:{port}/cobbler_api"
def local_get_cobbler_xmlrpc_url() -> str:
"""
Get the URL of the Cobbler XMLRPC API from the Cobbler settings file.
:return: The api entry point.
"""
# Load xmlrpc port
data = settings.read_settings_file()
return f"http://127.0.0.1:{data.get('xmlrpc_port', '25151')}"
def strip_none(
data: Optional[Union[List[Any], Dict[Any, Any], int, str, float]],
omit_none: bool = False,
) -> Union[List[Any], Dict[Any, Any], int, str, float]:
"""
Remove "None" entries from datastructures. Used prior to communicating with XMLRPC.
:param data: The data to strip None away.
:param omit_none: If the datastructure is not a single item then None items will be skipped instead of replaced if
set to "True".
:return: The modified data structure without any occurrence of None.
"""
if data is None:
data = "~"
elif isinstance(data, list):
data2: List[Any] = []
for element in data:
if omit_none and element is None:
pass
else:
data2.append(strip_none(element))
return data2
elif isinstance(data, dict):
data3: Dict[Any, Any] = {}
for key in list(data.keys()):
if omit_none and data[key] is None:
pass
else:
data3[str(key)] = strip_none(data[key])
return data3
return data
def revert_strip_none(
data: Union[str, int, float, bool, List[Any], Dict[Any, Any]]
) -> Optional[Union[str, int, float, bool, List[Any], Dict[Any, Any]]]:
"""
    Does the opposite of strip_none. If a value which represents None is detected, it replaces it with None.
:param data: The data to check.
:return: The data without None.
"""
if isinstance(data, str) and data.strip() == "~":
return None
if isinstance(data, list):
data2: List[Any] = []
for element in data:
data2.append(revert_strip_none(element))
return data2
if isinstance(data, dict):
data3: Dict[Any, Any] = {}
for key in data.keys():
data3[key] = revert_strip_none(data[key])
return data3
return data
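The XMLRPC-safe round trip performed by ``strip_none()`` and ``revert_strip_none()`` can be sketched standalone (the ``*_demo`` functions below are simplified mirrors of the helpers above, not the originals):

```python
# None becomes the sentinel "~" on the way out to XMLRPC and is restored to
# None on the way back in; dict keys are coerced to strings like in the
# original helper.
from typing import Any


def strip_none_demo(data: Any, omit_none: bool = False) -> Any:
    if data is None:
        return "~"
    if isinstance(data, list):
        return [strip_none_demo(e) for e in data if not (omit_none and e is None)]
    if isinstance(data, dict):
        return {
            str(k): strip_none_demo(v)
            for k, v in data.items()
            if not (omit_none and v is None)
        }
    return data


def revert_strip_none_demo(data: Any) -> Any:
    if isinstance(data, str) and data.strip() == "~":
        return None
    if isinstance(data, list):
        return [revert_strip_none_demo(e) for e in data]
    if isinstance(data, dict):
        return {k: revert_strip_none_demo(v) for k, v in data.items()}
    return data


payload = {"kernel_options": None, "tags": ["a", None]}
wire = strip_none_demo(payload)
assert wire == {"kernel_options": "~", "tags": ["a", "~"]}
assert revert_strip_none_demo(wire) == payload
```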
def lod_to_dod(_list: List[Any], indexkey: Hashable) -> Dict[Any, Any]:
r"""
Things like ``get_distros()`` returns a list of a dictionaries. Convert this to a dict of dicts keyed off of an
arbitrary field.
    Example: ``[ { "a" : 2 }, { "a" : 3 } ]`` -> ``{ 2 : { "a" : 2 }, 3 : { "a" : 3 } }``
:param _list: The list of dictionaries to use for the conversion.
:param indexkey: The position to use as dictionary keys.
:return: The converted dictionary. It is not guaranteed that the same key is not used multiple times.
"""
results: Dict[Any, Any] = {}
for item in _list:
results[item[indexkey]] = item
return results
def lod_sort_by_key(list_to_sort: List[Any], indexkey: Hashable) -> List[Any]:
"""
Sorts a list of dictionaries by a given key in the dictionaries.
    Note: This is not a destructive operation; ``sorted()`` returns a new list and leaves the input list and its dictionaries untouched.
:param list_to_sort: The list of dictionaries to sort.
:param indexkey: The key to index to dicts in the list.
:return: The sorted list.
"""
return sorted(list_to_sort, key=lambda k: k[indexkey])
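The two list-of-dicts helpers above boil down to a dict comprehension and a keyed ``sorted()`` call, sketched here on throwaway records:

```python
# lod_to_dod: index a list of dicts by one of their fields.
# lod_sort_by_key: sort such a list by a field.
records = [{"name": "b", "prio": 2}, {"name": "a", "prio": 1}]

indexed = {item["name"]: item for item in records}  # lod_to_dod equivalent
assert indexed["a"] == {"name": "a", "prio": 1}

ordered = sorted(records, key=lambda k: k["prio"])  # lod_sort_by_key equivalent
assert [r["name"] for r in ordered] == ["a", "b"]
```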
def dhcpconf_location(protocol: enums.DHCP, filename: str = "dhcpd.conf") -> str:
"""
This method returns the location of the dhcpd.conf file.
:param protocol: The DHCP protocol version (v4/v6) that is used.
:param filename: The filename of the DHCP configuration file.
:raises AttributeError: If the protocol is not v4/v6.
:return: The path possibly used for the dhcpd.conf file.
"""
if protocol not in enums.DHCP:
logger.info(
"DHCP configuration location could not be determined due to unknown protocol version."
)
raise AttributeError("DHCP must be version 4 or 6!")
if protocol == enums.DHCP.V6 and filename == "dhcpd.conf":
filename = "dhcpd6.conf"
(dist, version) = os_release()
if (
(dist in ("redhat", "centos") and version < 6)
or (dist == "fedora" and version < 11)
or (dist == "suse")
):
return os.path.join("/etc", filename)
if (dist == "debian" and int(version) < 6) or (
dist == "ubuntu" and version < 11.10
):
return os.path.join("/etc/dhcp3", filename)
return os.path.join("/etc/dhcp/", filename)
def namedconf_location() -> str:
"""
This returns the location of the named.conf file.
:return: If the distro is Debian/Ubuntu then this returns "/etc/bind/named.conf". Otherwise "/etc/named.conf"
"""
(dist, _) = os_release()
if dist in ("debian", "ubuntu"):
return "/etc/bind/named.conf"
return "/etc/named.conf"
def dhcp_service_name() -> str:
"""
Determine the dhcp service which is different on various distros. This is currently a hardcoded detection.
:return: This will return one of the following names: "dhcp3-server", "isc-dhcp-server", "dhcpd"
"""
(dist, version) = os_release()
if dist == "debian" and int(version) < 6:
return "dhcp3-server"
if dist == "debian" and int(version) >= 6:
return "isc-dhcp-server"
if dist == "ubuntu" and version < 11.10:
return "dhcp3-server"
if dist == "ubuntu" and version >= 11.10:
return "isc-dhcp-server"
return "dhcpd"
def named_service_name() -> str:
"""
Determine the named service which is normally different on various distros.
:return: This will return for debian/ubuntu bind9 and on other distros named-chroot or named.
"""
(dist, _) = os_release()
if dist in ("debian", "ubuntu"):
return "bind9"
if process_management.is_systemd():
return_code = subprocess_call(
["/usr/bin/systemctl", "is-active", "named-chroot"], shell=False
)
if return_code == 0:
return "named-chroot"
return "named"
def compare_versions_gt(ver1: str, ver2: str) -> bool:
"""
Compares versions like "0.9.3" with each other and decides if ver1 is greater than ver2.
:param ver1: The first version.
:param ver2: The second version.
:return: True if ver1 is greater, otherwise False.
"""
def versiontuple(version: str) -> Tuple[int, ...]:
return tuple(map(int, (version.split("."))))
return versiontuple(ver1) > versiontuple(ver2)
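The tuple trick behind ``compare_versions_gt()`` is worth making explicit: converting ``"0.9.3"`` to ``(0, 9, 3)`` lets Python's lexicographic tuple comparison do the right thing, avoiding the classic string-comparison trap:

```python
# Numeric tuple comparison gets multi-digit components right; plain string
# comparison does not, because "1" < "9" character-wise.
from typing import Tuple


def versiontuple(version: str) -> Tuple[int, ...]:
    return tuple(map(int, version.split(".")))


assert versiontuple("0.10.0") > versiontuple("0.9.3")  # numeric: correct
assert "0.10.0" < "0.9.3"  # string compare gets it backwards
```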
def kopts_overwrite(
kopts: Dict[Any, Any],
cobbler_server_hostname: str = "",
distro_breed: str = "",
system_name: str = "",
) -> None:
"""
    SUSE does not use 'text'. Instead, 'textmode' is used as the kernel option.
:param kopts: The kopts of the system.
:param cobbler_server_hostname: The server setting from our Settings.
:param distro_breed: The distro for the system to change to kopts for.
:param system_name: The system to overwrite the kopts for.
"""
# Type checks
if not isinstance(kopts, dict): # type: ignore
raise TypeError("kopts needs to be of type dict")
if not isinstance(cobbler_server_hostname, str): # type: ignore
raise TypeError("cobbler_server_hostname needs to be of type str")
if not isinstance(distro_breed, str): # type: ignore
raise TypeError("distro_breed needs to be of type str")
if not isinstance(system_name, str): # type: ignore
raise TypeError("system_name needs to be of type str")
# Function logic
if distro_breed == "suse":
if "textmode" in list(kopts.keys()):
kopts.pop("text", None)
elif "text" in list(kopts.keys()):
kopts.pop("text", None)
kopts["textmode"] = ["1"]
if system_name and cobbler_server_hostname:
# only works if pxe_just_once is enabled in global settings
kopts[
"info"
] = f"http://{cobbler_server_hostname}/cblr/svc/op/nopxe/system/{system_name}"
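The SUSE rewrite performed by ``kopts_overwrite()`` can be sketched standalone (a simplified mirror of the branch above, not the original helper):

```python
# For a SUSE breed the installer expects "textmode=1" instead of the "text"
# flag used by other breeds, so the key is swapped in place.
kopts = {"text": [], "quiet": []}

if "textmode" in kopts:
    kopts.pop("text", None)
elif "text" in kopts:
    kopts.pop("text", None)
    kopts["textmode"] = ["1"]

assert kopts == {"quiet": [], "textmode": ["1"]}
```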
def is_str_int(value: str) -> bool:
"""
Checks if the string value could be converted into an integer.
    This is necessary since the CLI only works with strings but many methods and checks expect an integer.
:param value: The value to check
:return: True if conversion is successful
"""
if not isinstance(value, str): # type: ignore
raise TypeError("value needs to be of type string")
try:
converted = int(value)
return str(converted) == value
except ValueError:
pass
return False
def is_str_float(value: str) -> bool:
"""
Checks if the string value could be converted into a float.
    This is necessary since the CLI only works with strings but many methods and checks expect a float.
:param value: The value to check
:return: True if conversion is successful
"""
if not isinstance(value, str): # type: ignore
raise TypeError("value needs to be of type string")
if is_str_int(value):
return True
try:
converted = float(value)
return str(converted) == value
except ValueError:
pass
return False
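The strict round-trip semantics of the two checks above are easy to miss: a string only counts if converting it and formatting it back reproduces the exact input, so padded or scientific notation is rejected. A standalone mirror (the ``*_demo`` names are illustrative):

```python
# "042" and "1e3" are valid numeric literals, but the round trip through
# int()/float() and back to str() does not reproduce them, so they fail.
def is_str_int_demo(value: str) -> bool:
    try:
        return str(int(value)) == value
    except ValueError:
        return False


def is_str_float_demo(value: str) -> bool:
    if is_str_int_demo(value):
        return True
    try:
        return str(float(value)) == value
    except ValueError:
        return False


assert is_str_int_demo("42") and not is_str_int_demo("042")
assert is_str_float_demo("1.5") and not is_str_float_demo("1e3")
```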
@contextlib.contextmanager
def filelock(lock_file: str):
"""
Context manager to acquire a file lock and release it afterwards
:param lock_file: Path to the file lock to acquire
    :raises OSError: Raised in case of an unexpected error while acquiring the file lock.
"""
fd = None
try:
fd = os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o660)
fcntl.flock(fd, fcntl.LOCK_EX)
yield
finally:
        if fd is not None:
with contextlib.suppress(OSError):
fcntl.flock(fd, fcntl.LOCK_UN)
os.close(fd)
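Hypothetical usage of an ``flock()``-based context manager like ``filelock()`` above — both the lock-file path and the critical section below are examples only:

```python
# Serialise a critical section across processes on the same host: the second
# acquirer of the same lock file blocks in LOCK_EX until the first releases.
import contextlib
import fcntl
import os
import tempfile


@contextlib.contextmanager
def filelock_demo(lock_file: str):
    fd = os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o660)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until the lock is free
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)


lock_path = os.path.join(tempfile.gettempdir(), "cobbler-demo.lock")
with filelock_demo(lock_path):
    pass  # exclusive section, e.g. rewriting a shared state file

assert os.path.exists(lock_path)  # the lock file itself is left in place
```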
def merge_dicts_recursive(
base_dict: Dict[str, Any], updating_dict: Dict[str, Any], str_append: bool = False
) -> Dict[str, Any]:
"""Merge updating_dict into base_config recursively.
:param base_dict: Base dictionary.
:param updating_dict: Updating dict, overrides base_dict.
:param str_append: Append the updating_dict String values to the base_dict
with the same key instead of overwriting the original value.
:returns dict: Merged dict"""
ret = base_dict.copy()
for k, v in updating_dict.items():
if (
k in base_dict
and isinstance(v, dict)
and isinstance(base_dict.get(k), dict)
):
ret[k] = merge_dicts_recursive(base_dict[k], v, str_append) # type: ignore
elif str_append and k in ret and isinstance(v, str):
ret[k] += v
else:
ret[k] = v
return ret
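The merge semantics above can be demonstrated with a standalone mirror (``merge_demo`` is a simplified copy, not the original): nested dicts are merged key by key, and with ``str_append=True`` colliding string values are concatenated rather than replaced.

```python
# Recursive dict merge: the update wins on scalar collisions, recurses on
# dict-vs-dict collisions, and optionally appends on string collisions.
from typing import Any, Dict


def merge_demo(
    base: Dict[str, Any], update: Dict[str, Any], str_append: bool = False
) -> Dict[str, Any]:
    ret = dict(base)
    for k, v in update.items():
        if k in base and isinstance(v, dict) and isinstance(base[k], dict):
            ret[k] = merge_demo(base[k], v, str_append)
        elif str_append and k in ret and isinstance(v, str):
            ret[k] += v
        else:
            ret[k] = v
    return ret


merged = merge_demo({"a": {"x": 1}, "s": "foo"}, {"a": {"y": 2}, "s": "bar"})
assert merged == {"a": {"x": 1, "y": 2}, "s": "bar"}
appended = merge_demo({"s": "foo"}, {"s": "bar"}, str_append=True)
assert appended == {"s": "foobar"}
```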
def create_files_if_not_existing(files: List[str]) -> None:
"""
Create empty files if they don't already exist. If they exist, do nothing.
:param files: A list of the files to create. If the base directory doesn't
exist, create it too.
    :raises OSError: In case the file cannot be created.
"""
for file in files:
if not file:
logger.warning("Attempted to create an empty file in %s, skipping", files)
continue
if not os.path.exists(file):
# It's possible the file dir doesn't exist yet, create it
basedir = os.path.dirname(file)
if not os.path.exists(basedir):
os.makedirs(basedir)
# Create empty file
open( # pylint: disable=consider-using-with
file, "a", encoding="UTF-8"
).close()
def remove_lines_in_file(filepath: str, remove_keywords: List[str]) -> None:
"""
Remove any line from file at a given filepath if it is a substring of any
element of the `remove_keywords` list.
:param filepath: A path of the file that you want to modify.
:param remove_keywords: A list of keywords. Any line in filepath that contains
any of the keywords will be removed.
    :raises OSError: In case the file cannot be read or modified.
"""
tmp_filepath = filepath + ".tmp"
with open(filepath, "r", encoding="UTF-8") as fh, open(
tmp_filepath, "w", encoding="UTF-8"
) as tmp_fh:
for line in fh:
if any(keyword in line for keyword in remove_keywords):
continue
tmp_fh.write(line)
os.replace(tmp_filepath, filepath)
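The tmp-file-plus-``os.replace()`` pattern used by ``remove_lines_in_file()`` above is worth exercising on a throwaway file: filtering happens into a sibling temp file, and the final atomic rename means readers never observe a half-written result. File names and keywords below are demo data.

```python
# Filter a file line by line into "<path>.tmp", then atomically swap it in.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.conf")
with open(path, "w", encoding="UTF-8") as fh:
    fh.write("keep me\ndrop: host-a\nkeep me too\n")

tmp_path = path + ".tmp"
with open(path, "r", encoding="UTF-8") as fh, open(
    tmp_path, "w", encoding="UTF-8"
) as tmp_fh:
    for line in fh:
        if any(keyword in line for keyword in ["drop:"]):
            continue  # skip lines matching any removal keyword
        tmp_fh.write(line)
os.replace(tmp_path, path)  # atomic on POSIX

with open(path, encoding="UTF-8") as fh:
    remaining = fh.read()
assert remaining == "keep me\nkeep me too\n"
```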
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/utils/process_management.py
# ---------------------------------------------------------------------------
"""
Module with helpers to detect the running init system (systemd, supervisord, SysV) and restart services with it.
"""
import logging
import os
from xmlrpc.client import Fault, ServerProxy
from cobbler import utils
logger = logging.getLogger()
def is_systemd() -> bool:
"""
Return whether this system uses systemd.
This method currently checks if the path ``/usr/lib/systemd/systemd`` exists.
"""
return os.path.exists("/usr/lib/systemd/systemd")
def is_supervisord() -> bool:
"""
Return whether this system uses supervisord.
This method currently checks if there is a running supervisord instance on ``localhost``.
"""
with ServerProxy("http://localhost:9001/RPC2") as server:
try:
server.supervisor.getState()
except OSError:
return False
return True
def is_service() -> bool:
"""
Return whether this system uses service.
This method currently checks if the path ``/usr/sbin/service`` exists.
"""
return os.path.exists("/usr/sbin/service")
def service_restart(service_name: str) -> int:
"""
    Restarts a daemon service independent of the underlying process manager. Currently, supervisord, systemd and SysV
    are supported. The check for which manager is present is done in the order just described.
:param service_name: The name of the service
:returns: If the system is SystemD or SysV based the return code of the restart command.
"""
if is_supervisord():
with ServerProxy("http://localhost:9001/RPC2") as server:
process_state = (
-1
) # Not redundant because we could run otherwise in an UnboundLocalError
try:
process_info = server.supervisor.getProcessInfo(service_name)
if not isinstance(process_info, dict):
                    raise ValueError("Received malformed process info from supervisord!")
process_state = process_info.get("state", -1)
if process_state in (10, 20):
server.supervisor.stopProcess(service_name)
if server.supervisor.startProcess(service_name): # returns a boolean
return 0
logger.error('Restarting service "%s" failed', service_name)
return 1
except Fault as client_fault:
logger.error(
'Restarting service "%s" failed (supervisord process state was "%s")',
service_name,
process_state,
exc_info=client_fault,
)
return 1
elif is_systemd():
restart_command = ["systemctl", "restart", service_name]
elif is_service():
restart_command = ["service", service_name, "restart"]
else:
logger.warning(
'We could not restart service "%s" due to an unsupported process manager!',
service_name,
)
return 1
ret = utils.subprocess_call(restart_command, shell=False)
if ret != 0:
logger.error('Restarting service "%s" failed', service_name)
return ret
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/utils/mtab.py
# ---------------------------------------------------------------------------
"""
We cache the contents of ``/etc/mtab``. The following module is used to keep our cache in sync.
"""
import os
from typing import Any, Dict, List, Optional, Tuple
MTAB_MTIME = None
MTAB_MAP = []
class MntEntObj:
"""
    Represents a single entry (one line) of the mtab file, mirroring the fields of ``struct mntent``.
"""
mnt_fsname = None # name of mounted file system
mnt_dir = None # file system path prefix
mnt_type = None # mount type (see mntent.h)
mnt_opts = None # mount options (see mntent.h)
mnt_freq = 0 # dump frequency in days
mnt_passno = 0 # pass number on parallel fsck
def __init__(self, input_data: Optional[str] = None):
"""
This is an object which contains information about a mounted filesystem.
:param input_data: This is a string which is separated internally by whitespace. If present it represents the
arguments: "mnt_fsname", "mnt_dir", "mnt_type", "mnt_opts", "mnt_freq" and "mnt_passno". The order
must be preserved, as well as the separation by whitespace.
"""
if input_data and isinstance(input_data, str): # type: ignore
split_data = input_data.split()
self.mnt_fsname = split_data[0]
self.mnt_dir = split_data[1]
self.mnt_type = split_data[2]
self.mnt_opts = split_data[3]
self.mnt_freq = int(split_data[4])
self.mnt_passno = int(split_data[5])
def __dict__(self) -> Dict[str, Any]: # type: ignore
"""
This maps all variables available in this class to a dictionary. The name of the keys is identical to the names
of the variables.
:return: The dictionary representation of an instance of this class.
"""
# See https://github.com/python/mypy/issues/6523 why this is ignored
return {
"mnt_fsname": self.mnt_fsname,
"mnt_dir": self.mnt_dir,
"mnt_type": self.mnt_type,
"mnt_opts": self.mnt_opts,
"mnt_freq": self.mnt_freq,
"mnt_passno": self.mnt_passno,
}
def __str__(self) -> str:
"""
This is the object representation of a mounted filesystem as a string. It can be fed to the constructor of this
class.
:return: The space separated list of values of this object.
"""
return f"{self.mnt_fsname} {self.mnt_dir} {self.mnt_type} {self.mnt_opts} {self.mnt_freq} {self.mnt_passno}"
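The whitespace format that ``MntEntObj`` parses and re-emits round-trips cleanly; a typical ``/etc/mtab`` line (demo data below) split into its six fields and joined back again:

```python
# A mtab line is six whitespace-separated fields: fsname, dir, type, opts,
# dump frequency and fsck pass number; the last two are integers.
line = "/dev/sda3 / ext4 rw,relatime 0 1"
fsname, mnt_dir, mnt_type, opts, freq, passno = line.split()
assert (fsname, mnt_dir) == ("/dev/sda3", "/")
assert (int(freq), int(passno)) == (0, 1)
assert " ".join([fsname, mnt_dir, mnt_type, opts, freq, passno]) == line
```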
def get_mtab(mtab: str = "/etc/mtab", vfstype: bool = False) -> List[MntEntObj]:
"""
Get the list of mtab entries. If a custom mtab should be read then the location can be overridden via a parameter.
:param mtab: The location of the mtab. Argument can be omitted if the mtab is at its default location.
:param vfstype: If this is True, then all filesystems which are nfs are returned. Otherwise this returns all mtab
entries.
:return: The list of requested mtab entries.
"""
# These two variables are required to be caches on the module level to be persistent during runtime.
global MTAB_MTIME, MTAB_MAP # pylint: disable=global-statement
mtab_stat = os.stat(mtab)
if mtab_stat.st_mtime != MTAB_MTIME: # type: ignore
# cache is stale ... refresh
MTAB_MTIME = mtab_stat.st_mtime # type: ignore
MTAB_MAP = __cache_mtab__(mtab) # type: ignore
# was a specific fstype requested?
if vfstype:
mtab_type_map: List[MntEntObj] = []
for ent in MTAB_MAP:
if ent.mnt_type == "nfs":
mtab_type_map.append(ent)
return mtab_type_map
return MTAB_MAP
def __cache_mtab__(mtab: str = "/etc/mtab") -> List[MntEntObj]:
"""
Open the mtab and cache it inside Cobbler. If it is guessed that the mtab hasn't changed the cache data is used.
    :param mtab: The location of the mtab. Argument can be omitted if the mtab is at its default location.
:return: The mtab content stripped from empty lines (if any are present).
"""
with open(mtab, encoding="UTF-8") as mtab_fd:
result = [
MntEntObj(line) for line in mtab_fd.read().split("\n") if len(line) > 0
]
return result
def get_file_device_path(fname: str) -> Tuple[Optional[str], str]:
"""
What this function attempts to do is take a file and return:
- the device the file is on
- the path of the file relative to the device.
For example:
/boot/vmlinuz -> (/dev/sda3, /vmlinuz)
/boot/efi/efi/redhat/elilo.conf -> (/dev/cciss0, /elilo.conf)
/etc/fstab -> (/dev/sda4, /etc/fstab)
:param fname: The filename to split up.
:return: A tuple containing the device and relative filename.
"""
# resolve any symlinks
fname = os.path.realpath(fname)
# convert mtab to a dict
mtab_dict: Dict[str, str] = {}
try:
for ent in get_mtab():
mtab_dict[ent.mnt_dir] = ent.mnt_fsname # type: ignore
except Exception:
pass
# find a best match
fdir = os.path.dirname(fname)
match = fdir in mtab_dict
chrootfs = False
while not match:
if fdir == os.path.sep:
chrootfs = True
break
fdir = os.path.realpath(os.path.join(fdir, os.path.pardir))
match = fdir in mtab_dict
# construct file path relative to device
if fdir != os.path.sep:
fname = fname[len(fdir) :]
if chrootfs:
return ":", fname
return mtab_dict[fdir], fname
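The longest-prefix walk inside ``get_file_device_path()`` can be sketched against a fake mtab mapping (the device names below are illustrative demo data, not read from the system):

```python
# Climb parent directories of a file until one of them is a known mount
# point, then split the path into (device, path-relative-to-device).
import os

mtab_dict = {"/": "/dev/sda4", "/boot": "/dev/sda3"}
fname = "/boot/vmlinuz"
fdir = os.path.dirname(fname)
while fdir not in mtab_dict and fdir != os.path.sep:
    fdir = os.path.dirname(fdir)

device, relative = mtab_dict[fdir], fname[len(fdir):]
assert (device, relative) == ("/dev/sda3", "/vmlinuz")
```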
def is_remote_file(file: str) -> bool:
"""
This function is trying to detect if the file in the argument is remote or not.
:param file: The filepath to check.
:return: If remote True, otherwise False.
"""
(dev, _) = get_file_device_path(file)
if dev is None:
return False
if dev.find(":") != -1:
return True
return False
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/utils/filesystem_helpers.py
# ---------------------------------------------------------------------------
"""
Helper functions for filesystem operations such as copying, linking, hashing and removing files.
"""
import errno
import glob
import hashlib
import json
import logging
import os
import pathlib
import shutil
import subprocess
import urllib.request
from typing import TYPE_CHECKING, Dict, Optional, Tuple, Union
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.utils import log_exc, mtab
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
logger = logging.getLogger()
def is_safe_to_hardlink(src: str, dst: str, api: "CobblerAPI") -> bool:
"""
Determine if it is safe to hardlink a file to the destination path.
:param src: The hardlink source path.
:param dst: The hardlink target path.
:param api: The api-instance to resolve needed information with.
:return: True if selinux is disabled, the file is on the same device, the source in not a link, and it is not a
remote path. If selinux is enabled the functions still may return true if the object is a kernel or initrd.
Otherwise returns False.
"""
    # FIXME: Calling this with empty strings returns True?!
(dev1, path1) = mtab.get_file_device_path(src)
(dev2, _) = mtab.get_file_device_path(dst)
if dev1 != dev2:
return False
# Do not hardlink to a symbolic link! Chances are high the new link will be dangling.
if pathlib.Path(src).is_symlink():
return False
if dev1 is None:
return False
if dev1.find(":") != -1:
# Is a remote file
return False
# Note: This is very Cobbler implementation specific!
if not api.is_selinux_enabled():
return True
path1_basename = str(pathlib.PurePath(path1).name)
if utils.re_initrd.match(path1_basename):
return True
if utils.re_kernel.match(path1_basename):
return True
# We're dealing with SELinux and files that are not safe to chown
return False
def sha1_file(file_path: Union[str, pathlib.Path], buffer_size: int = 65536) -> str:
"""
This function is emulating the functionality of the sha1sum tool.
:param file_path: The path to the file that should be hashed.
:param buffer_size: The buffer-size that should be used to hash the file.
:return: The SHA1 hash as sha1sum would return it.
"""
# Highly inspired by: https://stackoverflow.com/a/22058673
sha1 = hashlib.sha1()
with open(file_path, "rb") as file_fd:
while True:
data = file_fd.read(buffer_size)
if not data:
break
sha1.update(data)
return sha1.hexdigest()
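Chunked hashing as in ``sha1_file()`` above can be checked against a one-shot digest: reading in fixed-size pieces keeps memory flat for large files while yielding the same ``sha1sum``-style hex digest. An in-memory stream stands in for a file here:

```python
# Hash a byte stream in 64 KiB chunks and compare with hashing it at once.
import hashlib
import io

payload = b"x" * 200_000  # stand-in for a file on disk
sha1 = hashlib.sha1()
stream = io.BytesIO(payload)
while True:
    data = stream.read(65536)  # same default buffer_size as above
    if not data:
        break
    sha1.update(data)

assert sha1.hexdigest() == hashlib.sha1(payload).hexdigest()
```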
def hashfile(file_name: str, lcache: Optional[pathlib.Path] = None) -> Optional[str]:
r"""
Returns the sha1sum of the file
:param file_name: The file to get the sha1sum of.
:param lcache: This is a directory where Cobbler would store its ``link_cache.json`` file to speed up the return
of the hash. The hash looked up would be checked against the Cobbler internal mtime of the object.
:return: The sha1 sum or None if the file doesn't exist.
"""
hashfile_db: Dict[str, Tuple[float, str]] = {}
if lcache is None:
return None
dbfile = pathlib.Path(lcache) / "link_cache.json"
if dbfile.exists():
hashfile_db = json.loads(dbfile.read_text(encoding="utf-8"))
file = pathlib.Path(file_name)
if file.exists():
mtime = file.stat().st_mtime
if file_name in hashfile_db:
if hashfile_db[file_name][0] >= mtime:
return hashfile_db[file_name][1]
key = sha1_file(file_name)
hashfile_db[file_name] = (mtime, key)
__create_if_not_exists(lcache)
dbfile.write_text(json.dumps(hashfile_db), encoding="utf-8")
return key
return None
def cachefile(src: str, dst: str) -> None:
"""
Copy a file into a cache and link it into place. Use this with caution, otherwise you could end up copying data
twice if the cache is not on the same device as the destination.
:param src: The sourcefile for the copy action.
:param dst: The destination for the copy action.
"""
lcache = pathlib.Path(dst).parent.parent / ".link_cache"
if not lcache.is_dir():
lcache.mkdir()
key = hashfile(src, lcache=lcache)
if key is None:
logger.info("Cachefile skipped due to missing key for it!")
return None
cachefile_obj = lcache / key
if not cachefile_obj.exists():
logger.info("trying to create cache file %s", cachefile_obj)
copyfile(src, str(cachefile_obj))
logger.debug("trying cachelink %s -> %s -> %s", src, cachefile_obj, dst)
os.link(cachefile_obj, dst)
def linkfile(
api: "CobblerAPI", src: str, dst: str, symlink_ok: bool = False, cache: bool = True
) -> None:
"""
Attempt to create a link dst that points to src. Because file systems suck we attempt several different methods or
bail to just copying the file.
:param api: This parameter is needed to check if a file can be hardlinked. This method fails if this parameter is
not present.
:param src: The source file.
:param dst: The destination for the link.
:param symlink_ok: If it is okay to just use a symbolic link.
:param cache: If it is okay to use a cached file instead of the real one.
:raises CX: Raised in case the API is not given.
"""
dst_obj = pathlib.Path(dst)
src_obj = pathlib.Path(src)
if dst_obj.exists():
# if the destination exists, is it right in terms of accuracy and context?
if src_obj.samefile(dst):
if not is_safe_to_hardlink(src, dst, api):
# may have to remove old hardlinks for SELinux reasons as previous implementations were not complete
logger.info("removing: %s", dst)
dst_obj.unlink()
else:
return None
elif dst_obj.is_symlink():
# existing path exists and is a symlink, update the symlink
logger.info("removing: %s", dst)
dst_obj.unlink()
if is_safe_to_hardlink(src, dst, api):
# we can try a hardlink if the destination isn't to NFS or Samba this will help save space and sync time.
try:
logger.info("trying hardlink %s -> %s", src, dst)
os.link(src, dst)
return None
except (IOError, OSError):
# hardlink across devices, or link already exists we'll just symlink it if we can or otherwise copy it
pass
if symlink_ok:
# we can symlink anywhere except for /tftpboot because that is run chroot, so if we can symlink now, try it.
try:
logger.info("trying symlink %s -> %s", src, dst)
dst_obj.symlink_to(src_obj)
return None
except (IOError, OSError):
pass
if cache:
try:
cachefile(src, dst)
return None
except (IOError, OSError):
pass
# we couldn't hardlink and we couldn't symlink so we must copy
copyfile(src, dst)
def copyfile(src: str, dst: str, symlink: bool = False) -> None:
"""
Copy a file from source to the destination.
:param src: The source file. This may also be a folder.
:param dst: The destination for the file or folder.
:param symlink: If instead of a copy, a symlink is okay, then this may be set explicitly to "True".
:raises OSError: Raised in case ``src`` could not be read.
"""
src_obj = pathlib.Path(src)
dst_obj = pathlib.Path(dst)
try:
logger.info("copying: %s -> %s", src, dst)
if src_obj.is_dir():
shutil.copytree(src, dst, symlinks=symlink)
else:
shutil.copyfile(src, dst, follow_symlinks=symlink)
except Exception as error:
if not os.access(src, os.R_OK):
raise OSError(f"Cannot read: {src}") from error
if src_obj.samefile(dst_obj):
            # accommodate for the possibility that we already copied
# the file as a symlink/hardlink
raise
# traceback.print_exc()
# raise CX("Error copying %(src)s to %(dst)s" % { "src" : src, "dst" : dst})
def copyremotefile(src: str, dst1: str, api: Optional["CobblerAPI"] = None) -> None:
"""
    Copies a file from a remote location to the local destination.
:param src: The remote file URI.
:param dst1: The copy destination on the local filesystem.
:param api: This parameter is not used currently.
:raises OSError: Raised in case an error occurs when fetching or writing the file.
"""
try:
logger.info("copying: %s -> %s", src, dst1)
with urllib.request.urlopen(src) as srcfile:
with open(dst1, "wb") as output:
output.write(srcfile.read())
except Exception as error:
raise OSError(
f"Error while getting remote file ({src} -> {dst1}):\n{error}"
) from error
def copyfileimage(src: str, image_location: str, dst: str) -> None:
"""
Copy a file from source to the destination in the image.
:param src: The source file.
:param image_location: The location of the image.
:param dst: The destination for the file.
"""
cmd = ["mcopy", "-n", "-i", image_location, src, "::/" + dst]
try:
logger.info('running: "%s"', cmd)
utils.subprocess_call(cmd, shell=False)
except subprocess.CalledProcessError as error:
raise OSError(
f"Error while copying file to image ({src} -> {dst}):\n{error.output}"
) from error
def rmfile(path: str) -> None:
"""
Delete a single file.
:param path: The file to delete.
"""
try:
pathlib.Path(path).unlink()
logger.info('Successfully removed "%s"', path)
except FileNotFoundError:
pass
except OSError as ioe:
logger.warning('Could not remove file "%s": %s', path, ioe.strerror)
def rmtree_contents(path: str) -> None:
"""
Delete the content of a folder with a glob pattern.
:param path: This parameter presents the glob pattern of what should be deleted.
"""
what_to_delete = glob.glob(f"{path}/*")
for rmtree_path in what_to_delete:
rmtree(rmtree_path)
def rmtree(path: str) -> None:
"""
Delete a complete directory or just a single file.
    :param path: The directory or file to delete.
    :raises CX: Raised in case the deletion fails for a reason other than ``path`` not existing.
"""
try:
        if pathlib.Path(path).is_file():
            return rmfile(path)
logger.info("removing: %s", path)
shutil.rmtree(path, ignore_errors=True)
except OSError as ioe:
log_exc()
if ioe.errno != errno.ENOENT: # doesn't exist
raise CX(f"Error deleting {path}") from ioe
def rmglob_files(path: str, glob_pattern: str) -> None:
"""
Deletes all files in ``path`` with ``glob_pattern`` with the help of ``rmfile()``.
:param path: The folder of the files to remove.
:param glob_pattern: The glob pattern for the files to remove in ``path``.
"""
for rm_path in pathlib.Path(path).glob(glob_pattern):
rmfile(str(rm_path))
def mkdir(path: str, mode: int = 0o755) -> None:
"""
Create directory with a given mode.
:param path: The path to create the directory at.
:param mode: The mode to create the directory with.
:raises CX: Raised in case creating the directory fails with something different from error code 17 (directory
already exists).
"""
try:
pathlib.Path(path).mkdir(mode=mode, parents=True)
except OSError as os_error:
        # directory already exists
        if os_error.errno != errno.EEXIST:
log_exc()
raise CX(f"Error creating {path}") from os_error
def mkdirimage(path: pathlib.Path, image_location: str) -> None:
"""
Create a directory in an image.
:param path: The path to create the directory at.
:param image_location: The location of the image.
"""
path_parts = path.parts
cmd = ["mmd", "-i", image_location, str(path)]
try:
# Create all parent directories one by one inside the image
for parent_directory in range(1, len(path_parts) + 1):
cmd[-1] = "/".join(path_parts[:parent_directory])
logger.info('running: "%s"', cmd)
utils.subprocess_call(cmd, shell=False)
except subprocess.CalledProcessError as error:
raise OSError(
f"Error while creating directory ({cmd[-1]}) in image {image_location}.\n{error.output}"
) from error
def path_tail(apath: str, bpath: str) -> str:
"""
    Given two paths, where ``bpath`` is nested below ``apath``, find the part of ``bpath`` that is not in ``apath``.
    :param apath: The shorter path (the prefix).
    :param bpath: The longer path.
    :return: The trailing part of ``bpath``, prefixed with ``/``. If ``bpath`` does not start with ``apath``, an empty
             string is returned.
"""
position = bpath.find(apath)
if position != 0:
return ""
rposition = position + len(apath)
result: str = bpath[rposition:]
if not result.startswith("/"):
result = "/" + result
return result
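The prefix-stripping logic above can be tried in isolation; ``path_tail`` is reimplemented here verbatim so the example runs without Cobbler installed:

```python
def path_tail(apath: str, bpath: str) -> str:
    # Return the part of bpath that follows the apath prefix, or "" if
    # bpath does not start with apath.
    position = bpath.find(apath)
    if position != 0:
        return ""
    result = bpath[position + len(apath):]
    if not result.startswith("/"):
        result = "/" + result
    return result

print(path_tail("/var/www", "/var/www/cobbler/misc"))  # /cobbler/misc
print(path_tail("/srv", "/var/www"))                   # "" (prefix mismatch)
```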
def safe_filter(var: Optional[str]) -> None:
r"""
    This function does nothing if the argument contains neither a semicolon nor two consecutive dots.
    :param var: The string to check. ``None`` is accepted and ignored.
    :raises CX: In case any ``..`` or ``;`` is found in ``var``.
"""
if var is None:
return None
if var.find("..") != -1 or var.find(";") != -1:
raise CX("Invalid characters found in input")
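The filter rejects path traversal (``..``) and shell separators (``;``); this stand-in raises ``ValueError`` instead of Cobbler's ``CX`` so it runs standalone:

```python
from typing import Optional

def safe_filter(var: Optional[str]) -> None:
    # Mirrors the check above; ValueError substitutes for Cobbler's CX.
    if var is None:
        return None
    if ".." in var or ";" in var:
        raise ValueError("Invalid characters found in input")

safe_filter("fedora-37-x86_64")  # passes silently
try:
    safe_filter("../etc/passwd")
except ValueError as err:
    print(err)  # Invalid characters found in input
```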
def __create_if_not_exists(path: pathlib.Path) -> None:
"""
Creates a directory if it has not already been created.
    :param path: The path where the directory should be created. Parent directories must exist.
"""
if not path.exists():
mkdir(str(path))
def __symlink_if_not_exists(source: pathlib.Path, target: pathlib.Path) -> None:
"""
Symlinks a directory if the symlink doesn't exist.
:param source: The source directory
:param target: The target directory
"""
if not target.exists():
target.symlink_to(source)
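Note that the tftpboot setup below links with *relative* targets (``../images``), so the links survive moving the whole tree. A self-contained sketch of that idiom:

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "images").mkdir()
    (root / "grub").mkdir()
    link = root / "grub" / "images"
    if not link.exists():
        # Relative target: the link stays valid if the tree is relocated.
        link.symlink_to(pathlib.Path("../images"))
    resolved_ok = link.resolve() == (root / "images").resolve()
print(resolved_ok)  # True
```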
def create_web_dirs(api: "CobblerAPI") -> None:
"""
Create directories for HTTP content
:param api: CobblerAPI
"""
webdir_obj = pathlib.Path(api.settings().webdir)
webroot_distro_mirror = webdir_obj / "distro_mirror"
webroot_misc = webdir_obj / "misc"
webroot_directory_paths = [
webdir_obj / "localmirror",
webdir_obj / "repo_mirror",
webroot_distro_mirror,
webroot_distro_mirror / "config",
webdir_obj / "links",
webroot_misc,
webdir_obj / "pub",
webdir_obj / "rendered",
webdir_obj / "images",
]
for directory_path in webroot_directory_paths:
__create_if_not_exists(directory_path)
# Copy anamon scripts to the webroot
misc_path = pathlib.Path("/var/lib/cobbler/misc")
rmtree_contents(str(webroot_misc))
    for file in [f for f in misc_path.iterdir() if f.is_file()]:
        copyfile(str(file), str(webroot_misc / file.name))
def create_tftpboot_dirs(api: "CobblerAPI") -> None:
"""
Create directories for tftpboot images
:param api: CobblerAPI
"""
bootloc = pathlib.Path(api.settings().tftpboot_location)
grub_dir = bootloc / "grub"
esxi_dir = bootloc / "esxi"
tftpboot_directory_paths = [
bootloc / "boot",
bootloc / "etc",
bootloc / "images2",
bootloc / "ppc",
bootloc / "s390x",
bootloc / "pxelinux.cfg",
grub_dir,
grub_dir / "system",
grub_dir / "system_link",
bootloc / "images",
bootloc / "ipxe",
esxi_dir,
esxi_dir / "system",
]
for directory_path in tftpboot_directory_paths:
__create_if_not_exists(directory_path)
grub_images_link = grub_dir / "images"
__symlink_if_not_exists(pathlib.Path("../images"), grub_images_link)
esxi_images_link = esxi_dir / "images"
__symlink_if_not_exists(pathlib.Path("../images"), esxi_images_link)
esxi_pxelinux_link = esxi_dir / "pxelinux.cfg"
__symlink_if_not_exists(pathlib.Path("../pxelinux.cfg"), esxi_pxelinux_link)
def create_trigger_dirs(api: "CobblerAPI") -> None:
"""
Creates the directories that the user/admin can fill with dynamically executed scripts.
:param api: CobblerAPI
"""
# This is not yet a setting
libpath = pathlib.Path("/var/lib/cobbler")
trigger_directory = libpath / "triggers"
trigger_directories = [
trigger_directory,
trigger_directory / "add",
trigger_directory / "add" / "distro",
trigger_directory / "add" / "distro" / "pre",
trigger_directory / "add" / "distro" / "post",
trigger_directory / "add" / "profile",
trigger_directory / "add" / "profile" / "pre",
trigger_directory / "add" / "profile" / "post",
trigger_directory / "add" / "system",
trigger_directory / "add" / "system" / "pre",
trigger_directory / "add" / "system" / "post",
trigger_directory / "add" / "repo",
trigger_directory / "add" / "repo" / "pre",
trigger_directory / "add" / "repo" / "post",
trigger_directory / "add" / "menu",
trigger_directory / "add" / "menu" / "pre",
trigger_directory / "add" / "menu" / "post",
trigger_directory / "delete",
trigger_directory / "delete" / "distro",
trigger_directory / "delete" / "distro" / "pre",
trigger_directory / "delete" / "distro" / "post",
trigger_directory / "delete" / "profile",
trigger_directory / "delete" / "profile" / "pre",
trigger_directory / "delete" / "profile" / "post",
trigger_directory / "delete" / "system",
trigger_directory / "delete" / "system" / "pre",
trigger_directory / "delete" / "system" / "post",
trigger_directory / "delete" / "repo",
trigger_directory / "delete" / "repo" / "pre",
trigger_directory / "delete" / "repo" / "post",
trigger_directory / "delete" / "menu",
trigger_directory / "delete" / "menu" / "pre",
trigger_directory / "delete" / "menu" / "post",
trigger_directory / "install",
trigger_directory / "install" / "pre",
trigger_directory / "install" / "post",
trigger_directory / "install" / "firstboot",
trigger_directory / "sync",
trigger_directory / "sync" / "pre",
trigger_directory / "sync" / "post",
trigger_directory / "change",
trigger_directory / "task",
trigger_directory / "task" / "distro",
trigger_directory / "task" / "distro" / "pre",
trigger_directory / "task" / "distro" / "post",
trigger_directory / "task" / "profile",
trigger_directory / "task" / "profile" / "pre",
trigger_directory / "task" / "profile" / "post",
trigger_directory / "task" / "system",
trigger_directory / "task" / "system" / "pre",
trigger_directory / "task" / "system" / "post",
trigger_directory / "task" / "repo",
trigger_directory / "task" / "repo" / "pre",
trigger_directory / "task" / "repo" / "post",
trigger_directory / "task" / "menu",
trigger_directory / "task" / "menu" / "pre",
trigger_directory / "task" / "menu" / "post",
]
for directory_path in trigger_directories:
__create_if_not_exists(directory_path)
def create_json_database_dirs(api: "CobblerAPI") -> None:
"""
Creates the database directories for the file serializer
:param api: CobblerAPI
"""
# This is not yet a setting
libpath = pathlib.Path("/var/lib/cobbler")
database_directories = [
libpath / "collections",
libpath / "collections" / "distros",
libpath / "collections" / "images",
libpath / "collections" / "profiles",
libpath / "collections" / "repos",
libpath / "collections" / "systems",
libpath / "collections" / "menus",
]
for directory_path in database_directories:
__create_if_not_exists(directory_path)
# ---- File: cobbler_cobbler/cobbler/settings/__init__.py ----
"""
Cobbler app-wide settings
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2008, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: 2022 Pablo Suárez Hernández <psuarezhernandez@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import datetime
import logging
import os.path
import pathlib
import shutil
import traceback
from pathlib import Path
from typing import Any, Dict, List, Optional
import yaml
from schema import ( # type: ignore
SchemaError,
SchemaMissingKeyError,
SchemaWrongKeyError,
)
from cobbler.settings import migrations
from cobbler.utils import input_converters
class Settings:
"""
This class contains all app-wide settings of Cobbler. It should only exist once in a Cobbler instance.
"""
@staticmethod
def collection_type() -> str:
"""
This is a hardcoded string which represents the collection type.
:return: "setting"
"""
return "setting"
@staticmethod
def collection_types() -> str:
"""
        Return the plural name of the collection.
"""
return "settings"
def __init__(self) -> None:
"""
Constructor.
"""
self.auto_migrate_settings = False
self.autoinstall_scheme = "http"
self.allow_duplicate_hostnames = False
self.allow_duplicate_ips = False
self.allow_duplicate_macs = False
self.allow_dynamic_settings = False
self.always_write_dhcp_entries = False
self.anamon_enabled = False
self.auth_token_expiration = 3600
self.authn_pam_service = "login"
self.autoinstall_snippets_dir = "/var/lib/cobbler/snippets"
self.autoinstall_templates_dir = "/var/lib/cobbler/templates"
self.bind_chroot_path = ""
self.bind_zonefile_path = "/var/lib/named"
self.bind_master = "127.0.0.1"
self.boot_loader_conf_template_dir = "/etc/cobbler/boot_loader_conf"
self.bootloaders_dir = "/var/lib/cobbler/loaders"
self.bootloaders_shim_folder = "/usr/share/efi/*/"
self.bootloaders_shim_file = r"shim\.efi$"
self.secure_boot_grub_folder = "/usr/share/efi/*/"
self.secure_boot_grub_file = r"grub\.efi$"
self.bootloaders_ipxe_folder = "/usr/share/ipxe/"
self.bootloaders_formats = {
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {
"binary_name": "grubaa64.efi",
"extra_modules": ["efinet"],
},
"i386-efi": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
"binary_name": "grubx64.efi",
"extra_modules": ["chain", "efinet"],
},
}
self.bootloaders_modules = [
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
]
self.grubconfig_dir = "/var/lib/cobbler/grub_config"
self.build_reporting_enabled = False
self.build_reporting_email: List[str] = []
self.build_reporting_ignorelist: List[str] = []
self.build_reporting_sender = ""
self.build_reporting_smtp_server = "localhost"
self.build_reporting_subject = ""
self.buildisodir = "/var/cache/cobbler/buildiso"
self.cheetah_import_whitelist = ["re", "random", "time"]
self.client_use_https = False
self.client_use_localhost = False
self.cobbler_master = ""
self.convert_server_to_ip = False
self.createrepo_flags = "-c cache -s sha"
self.autoinstall = "default.ks"
self.default_name_servers: List[str] = []
self.default_name_servers_search: List[str] = []
self.default_ownership = ["admin"]
self.default_password_crypted = r"\$1\$mF86/UHC\$WvcIcX2t6crBz2onWxyac."
self.default_template_type = "cheetah"
self.default_virt_bridge = "virbr0"
self.default_virt_disk_driver = "raw"
self.default_virt_file_size = 5.0
self.default_virt_ram = 512
self.default_virt_type = "kvm"
self.dnsmasq_ethers_file = "/etc/ethers"
self.dnsmasq_hosts_file = "/var/lib/cobbler/cobbler_hosts"
self.enable_ipxe = False
self.enable_menu = True
self.extra_settings_list: List[str] = []
self.grub2_mod_dir = "/usr/share/grub2/"
self.http_port = 80
self.iso_template_dir = "/etc/cobbler/iso"
self.jinja2_includedir = "/var/lib/cobbler/jinja2"
self.kernel_options: Dict[str, Any] = {}
self.ldap_anonymous_bind = True
self.ldap_base_dn = "DC=devel,DC=redhat,DC=com"
self.ldap_port = 389
self.ldap_search_bind_dn = ""
self.ldap_search_passwd = ""
self.ldap_search_prefix = "uid="
self.ldap_server = "grimlock.devel.redhat.com"
self.ldap_tls = True
self.ldap_tls_cacertdir = ""
self.ldap_tls_cacertfile = ""
self.ldap_tls_certfile = ""
self.ldap_tls_keyfile = ""
self.ldap_tls_reqcert = "hard"
self.ldap_tls_cipher_suite = ""
self.bind_manage_ipmi = False
# TODO: Remove following line
self.manage_dhcp = False
self.manage_dhcp_v6 = False
self.manage_dhcp_v4 = False
self.manage_dns = False
self.manage_forward_zones: List[str] = []
self.manage_reverse_zones: List[str] = []
self.manage_genders = False
self.manage_rsync = False
self.manage_tftpd = True
self.modules = {
"authentication": {
"module": "authentication.configfile",
"hash_algorithm": "sha3_512",
},
"authorization": {"module": "authorization.allowall"},
"dns": {"module": "managers.bind"},
"dhcp": {"module": "managers.isc"},
"tftpd": {"module": "managers.in_tftpd"},
"serializers": {"module": "serializers.file"},
}
self.mongodb = {"host": "localhost", "port": 27017}
self.next_server_v4 = "127.0.0.1"
self.next_server_v6 = "::1"
self.nsupdate_enabled = False
self.nsupdate_log = "/var/log/cobbler/nsupdate.log"
self.nsupdate_tsig_algorithm = "hmac-sha512"
self.nsupdate_tsig_key: List[str] = []
self.power_management_default_type = "ipmilanplus"
self.proxies: List[str] = []
self.proxy_url_ext = ""
self.proxy_url_int = ""
self.puppet_auto_setup = False
self.puppet_parameterized_classes = True
self.puppet_server = "puppet"
self.puppet_version = 2
self.puppetca_path = "/usr/bin/puppet"
self.pxe_just_once = True
self.nopxe_with_triggers = True
self.redhat_management_permissive = False
self.redhat_management_server = "xmlrpc.rhn.redhat.com"
self.redhat_management_key = ""
self.uyuni_authentication_endpoint = ""
self.register_new_installs = False
self.remove_old_puppet_certs_automatically = False
self.replicate_repo_rsync_options = "-avzH"
self.replicate_rsync_options = "-avzH"
self.reposync_flags = "-l -m -d"
self.reposync_rsync_flags = ""
self.restart_dhcp = True
self.restart_dns = True
self.run_install_triggers = True
self.scm_track_enabled = False
self.scm_track_mode = "git"
self.scm_track_author = "cobbler <cobbler@localhost>"
self.scm_push_script = "/bin/true"
self.serializer_pretty_json = False
self.server = "127.0.0.1"
self.sign_puppet_certs_automatically = False
self.signature_path = "/var/lib/cobbler/distro_signatures.json"
self.signature_url = "https://cobbler.github.io/signatures/3.0.x/latest.json"
self.syslinux_dir = "/usr/share/syslinux"
self.syslinux_memdisk_folder = "/usr/share/syslinux"
self.syslinux_pxelinux_folder = "/usr/share/syslinux"
self.tftpboot_location = "/var/lib/tftpboot"
self.virt_auto_boot = True
self.webdir = "/var/www/cobbler"
self.webdir_whitelist = [
".link_cache",
"misc",
"distro_mirror",
"images",
"links",
"localmirror",
"pub",
"rendered",
"repo_mirror",
"repo_profile",
"repo_system",
"svc",
"web",
"webui",
]
self.xmlrpc_port = 25151
self.yum_distro_priority = 1
self.yum_post_install_mirror = True
self.yumdownloader_flags = "--resolve"
self.windows_enabled = False
self.windows_template_dir = "/etc/cobbler/windows"
self.samba_distro_share = "DISTRO"
self.cache_enabled = False
self.lazy_start = False
def to_dict(self, resolved: bool = False) -> Dict[str, Any]:
"""
Return an easily serializable representation of the config.
.. deprecated:: 3.2.1
Use ``obj.__dict__`` directly please. Will be removed with 3.3.0
:param resolved: Present for the compatibility with the Cobbler collections.
:return: The dict with all user settings combined with settings which are left to the default.
"""
# TODO: Deprecate and remove. Tailcall is not needed.
return self.__dict__
def from_dict(self, new_values: Dict[str, Any]) -> Optional["Settings"]:
"""
        Modify this object to load the values from the given dictionary. If the handed dict would lead to an invalid
        object, the old values are restored and a ``ValueError`` is raised.
        .. warning:: If the dict from the args does not include all settings, Cobbler may behave unexpectedly.
:param new_values: The dictionary with settings to replace.
:return: Returns the settings instance this method was called from.
"""
if new_values is None: # type: ignore[reportUnnecessaryComparison]
logging.warning("Not loading empty settings dictionary!")
return None
old_settings = self.__dict__ # pylint: disable=access-member-before-definition
self.__dict__.update( # pylint: disable=access-member-before-definition
new_values
)
if not self.is_valid():
self.__dict__ = old_settings
raise ValueError(
"New settings would not be valid. Please fix the dict you pass."
)
return self
def is_valid(self) -> bool:
"""
Silently drops all errors and returns ``True`` when everything is valid.
:return: If this settings object is valid this returns true. Otherwise false.
"""
try:
validate_settings(self.__dict__)
except SchemaError:
return False
return True
def __getattr__(self, name: str) -> Any:
"""
This returns the current value of the setting named in the args.
:param name: The setting to return the value of.
:return: The value of the setting "name".
"""
try:
if name == "kernel_options":
# backwards compatibility -- convert possible string value to dict
result = input_converters.input_string_or_dict(
self.__dict__[name], allow_multiples=False
)
self.__dict__[name] = result
return result
# TODO: This needs to be explicitly tested
if name == "manage_dhcp":
return self.manage_dhcp_v4
return self.__dict__[name]
except Exception as error:
if name in self.__dict__:
return self.__dict__[name]
raise AttributeError(
f"no settings attribute named '{name}' found"
) from error
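``__getattr__`` is only invoked when normal attribute lookup fails, which is what makes the ``manage_dhcp`` legacy alias above work. A minimal sketch of the pattern (class and attribute names are illustrative, not Cobbler's full settings object):

```python
class SettingsSketch:
    def __init__(self) -> None:
        self.server = "127.0.0.1"
        self.manage_dhcp_v4 = True

    def __getattr__(self, name: str):
        # Called only when the attribute is not found by normal lookup.
        if name == "manage_dhcp":  # legacy alias kept for compatibility
            return self.manage_dhcp_v4
        raise AttributeError(f"no settings attribute named '{name}' found")

s = SettingsSketch()
print(s.manage_dhcp)  # True, resolved through the alias
```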
def save(
self,
filepath: str = "/etc/cobbler/settings.yaml",
ignore_keys: Optional[List[str]] = None,
) -> None:
"""
Saves the settings to the disk.
:param filepath: This sets the path of the settingsfile to write.
:param ignore_keys: The list of ignore keys to exclude from migration.
"""
if not ignore_keys:
ignore_keys = []
update_settings_file(self.to_dict(), filepath, ignore_keys)
def validate_settings(
settings_content: Dict[str, Any], ignore_keys: Optional[List[str]] = None
) -> Dict[str, Any]:
"""
This function performs logical validation of our loaded YAML files.
This function will:
- Perform type validation on all values of all keys.
- Provide defaults for optional settings.
:param settings_content: The dictionary content from the YAML file.
:param ignore_keys: The list of ignore keys to exclude from validation.
:raises SchemaError: In case the data given is invalid.
:return: The Settings of Cobbler which can be safely used inside this instance.
"""
if not ignore_keys:
ignore_keys = []
# Extra settings and ignored keys are excluded from validation
data, data_to_exclude_from_validation = migrations.filter_settings_to_validate(
settings_content, ignore_keys
)
result = migrations.normalize(data)
result.update(data_to_exclude_from_validation)
return result
def read_yaml_file(filepath: str = "/etc/cobbler/settings.yaml") -> Dict[str, Any]:
"""
Reads settings files from ``filepath`` and saves the content in a dictionary.
    :param filepath: Settings file path, defaults to "/etc/cobbler/settings.yaml"
:raises FileNotFoundError: In case file does not exist or is a directory.
:raises yaml.YAMLError: In case the file is not a valid YAML file.
:return: The aggregated dict of all settings.
"""
if not os.path.isfile(filepath):
raise FileNotFoundError(
f'Given path "{filepath}" does not exist or is a directory.'
)
try:
with open(filepath, encoding="UTF-8") as main_settingsfile:
filecontent: Dict[str, Any] = yaml.safe_load(main_settingsfile.read())
except yaml.YAMLError as error:
traceback.print_exc()
raise yaml.YAMLError(f'"{filepath}" is not a valid YAML file') from error
return filecontent
def read_settings_file(
filepath: str = "/etc/cobbler/settings.yaml",
ignore_keys: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""
Utilizes ``read_yaml_file()``. If the read settings file is invalid in the context of Cobbler we will return an
empty dictionary.
:param filepath: The path to the settings file.
:param ignore_keys: The list of ignore keys to exclude from validation.
    :raises SchemaMissingKeyError: In case keys are missing.
:raises SchemaWrongKeyError: In case keys are not listed in the schema.
:raises SchemaError: In case the schema is wrong.
:return: A dictionary with the settings. As a word of caution: This may not represent a correct settings object, it
will only contain a correct YAML representation.
"""
if not ignore_keys:
ignore_keys = []
filecontent = read_yaml_file(filepath)
# FIXME: Do not call validate_settings() because of chicken - egg problem
try:
validate_settings(filecontent, ignore_keys)
except SchemaMissingKeyError:
logging.exception("Settings file was not returned due to missing keys.")
logging.debug('The settings to read were: "%s"', filecontent)
return {}
except SchemaWrongKeyError:
        logging.exception("Settings file was not returned due to unknown keys.")
logging.debug('The settings to read were: "%s"', filecontent)
return {}
except SchemaError:
        logging.exception("Settings file was not returned due to an error in the schema.")
logging.debug('The settings to read were: "%s"', filecontent)
return {}
return filecontent
def update_settings_file(
data: Dict[str, Any],
filepath: str = "/etc/cobbler/settings.yaml",
ignore_keys: Optional[List[str]] = None,
) -> bool:
"""
Write data handed to this function into the settings file of Cobbler. This function overwrites the existing content.
    It will only write valid settings. If you are trying to save invalid data, the schema error is logged and ``False``
    is returned.
:param data: The data to put into the settings file.
:param filepath: This sets the path of the settingsfile to write.
:param ignore_keys: The list of ignore keys to exclude from validation.
:return: True if the action succeeded. Otherwise return False.
"""
if not ignore_keys:
ignore_keys = []
# Backup old settings file
path = pathlib.Path(filepath)
if path.exists():
timestamp = str(datetime.datetime.now().strftime("%Y%m%d_%H-%M-%S"))
shutil.copy(path, path.parent.joinpath(f"{path.stem}_{timestamp}{path.suffix}"))
try:
validated_data = validate_settings(data, ignore_keys)
version = migrations.get_installed_version()
# If "ignore_keys" was set during migration, we persist these keys as "extra_settings_list"
# in the final settings, so the migrated settings are able to validate later
if ignore_keys or "extra_settings_list" in validated_data:
if "extra_settings_list" in validated_data:
validated_data["extra_settings_list"].extend(ignore_keys)
# Remove items from "extra_settings_list" in case it is now a valid settings
current_schema = list(
map(
lambda x: getattr(x, "_schema", x),
migrations.VERSION_LIST[version].schema._schema.keys(),
)
)
validated_data["extra_settings_list"] = [
x
for x in validated_data["extra_settings_list"]
if x not in current_schema
]
else:
validated_data["extra_settings_list"] = ignore_keys
validated_data["extra_settings_list"] = list(
set(validated_data["extra_settings_list"])
)
with open(filepath, "w", encoding="UTF-8") as settings_file:
yaml_dump = yaml.safe_dump(validated_data)
header = "# Cobbler settings file\n"
header += "# Docs for this file can be found at: https://cobbler.readthedocs.io/en/latest/cobbler-conf.html"
header += "\n\n"
yaml_dump = header + yaml_dump
settings_file.write(yaml_dump)
return True
except SchemaMissingKeyError:
logging.exception(
"Settings file was not written to the disc due to missing keys."
)
logging.debug('The settings to write were: "%s"', data)
return False
except SchemaError:
logging.exception(
"Settings file was not written to the disc due to an error in the schema."
)
logging.debug('The settings to write were: "%s"', data)
return False
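Before writing, the function above backs up the old file under a timestamped name built from the original stem and suffix. The naming scheme in isolation (the timestamp is fixed here so the result is deterministic):

```python
import datetime
import pathlib

path = pathlib.Path("/etc/cobbler/settings.yaml")
# A fixed timestamp stands in for datetime.datetime.now() in this sketch.
timestamp = datetime.datetime(2024, 9, 5, 17, 11, 26).strftime("%Y%m%d_%H-%M-%S")
backup = path.parent.joinpath(f"{path.stem}_{timestamp}{path.suffix}")
print(backup)  # /etc/cobbler/settings_20240905_17-11-26.yaml
```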
def migrate(
yaml_dict: Dict[str, Any],
settings_path: Path,
ignore_keys: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""
Migrates the current settings
:param yaml_dict: The settings dict
:param settings_path: The settings path
:param ignore_keys: The list of ignore keys to exclude from migration.
:return: The migrated settings
"""
if not ignore_keys:
ignore_keys = []
# Extra settings and ignored keys are excluded from validation
data, data_to_exclude_from_validation = migrations.filter_settings_to_validate(
yaml_dict, ignore_keys
)
result = migrations.migrate(data, settings_path)
result.update(data_to_exclude_from_validation)
return result
# ---- File: cobbler_cobbler/cobbler/settings/migrations/V3_1_0.py ----
"""
Migration from V3.0.1 to V3.1.0
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import SchemaError # type: ignore
from cobbler.settings.migrations import V3_0_1
# schema identical to V3_0_1
schema = V3_0_1.schema
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.1.0 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_0_1.validate(settings):
raise SchemaError("V3.0.1: Schema error while validating")
return normalize(settings)
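The migration modules chain together: each version first validates against the *previous* version's schema, then normalizes (and, where needed, adds or renames keys). A pure-Python stand-in for that chain (the version names and the ``http_port`` key here are illustrative; the real code validates with the ``schema`` package):

```python
def validate_v3_0_1(settings: dict) -> bool:
    # Stand-in check; the real code validates against a Schema object.
    return "server" in settings

def migrate_v3_1_0(settings: dict) -> dict:
    if not validate_v3_0_1(settings):
        raise ValueError("V3.0.1: Schema error while validating")
    migrated = dict(settings)
    # Hypothetical key introduced by the newer version.
    migrated.setdefault("http_port", 80)
    return migrated

print(migrate_v3_1_0({"server": "127.0.0.1"}))
```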
# ---- File: cobbler_cobbler/cobbler/settings/migrations/V3_3_2.py ----
"""
Migration from V3.3.1 to V3.3.2
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2022 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import SchemaError # type: ignore
from cobbler.settings.migrations import V3_3_1
schema = V3_3_1.schema
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference V3.3.1 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    Migration of the settings ``settings`` to the V3.3.2 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_3_1.validate(settings):
raise SchemaError("V3.3.1: Schema error while validating")
# rename keys and update their value
# add missing keys
# name - value pairs
return normalize(settings)
# ---- File: cobbler_cobbler/cobbler/settings/migrations/V3_3_3.py ----
"""
Migration from V3.3.2 to V3.3.3
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2022 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
schema = Schema(
{
"auto_migrate_settings": bool,
"allow_duplicate_hostnames": bool,
"allow_duplicate_ips": bool,
"allow_duplicate_macs": bool,
"allow_dynamic_settings": bool,
"always_write_dhcp_entries": bool,
"anamon_enabled": bool,
"auth_token_expiration": int,
"authn_pam_service": str,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
"bind_zonefile_path": str,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional(
"bootloaders_formats",
default={
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {
"binary_name": "grubaa64.efi",
"extra_modules": ["efinet"],
},
"i386": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
                "binary_name": "grubx64.efi",
"extra_modules": ["chain", "efinet"],
},
},
): dict,
Optional(
"bootloaders_modules",
default=[
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
],
): list,
Optional("bootloaders_shim_folder", default="@@shim_folder@@"): str,
Optional("bootloaders_shim_file", default="@@shim_file@@"): str,
Optional("bootloaders_ipxe_folder", default="@@ipxe_folder@@"): str,
Optional("syslinux_dir", default="@@syslinux_dir@@"): str,
Optional("syslinux_memdisk_folder", default="@@memdisk_folder@@"): str,
Optional("syslinux_pxelinux_folder", default="@@pxelinux_folder@@"): str,
Optional("grub2_mod_dir", default="/usr/share/grub"): str,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"build_reporting_enabled": bool,
"build_reporting_email": [str],
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"cheetah_import_whitelist": [str],
"client_use_https": bool,
"client_use_localhost": bool,
Optional("cobbler_master", default=""): str,
Optional("convert_server_to_ip", default=False): bool,
"createrepo_flags": str,
"autoinstall": str,
"default_name_servers": [str],
"default_name_servers_search": [str],
"default_ownership": [str],
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": float,
"default_virt_ram": int,
"default_virt_type": str,
"enable_ipxe": bool,
"enable_menu": bool,
Optional("extra_settings_list", default=[]): [str],
"http_port": int,
"include": [str],
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("jinja2_includedir", default="/var/lib/cobbler/jinja2"): str,
"kernel_options": dict,
Optional("ldap_anonymous_bind", default=True): bool,
Optional("ldap_base_dn", default="DC=devel,DC=redhat,DC=com"): str,
Optional("ldap_port", default=389): int,
Optional("ldap_search_bind_dn", default=""): str,
Optional("ldap_search_passwd", default=""): str,
Optional("ldap_search_prefix", default="uid="): str,
Optional("ldap_server", default="grimlock.devel.redhat.com"): str,
Optional("ldap_tls", default=True): bool,
Optional("ldap_tls_cacertdir", default=""): str,
Optional("ldap_tls_cacertfile", default=""): str,
Optional("ldap_tls_certfile", default=""): str,
Optional("ldap_tls_keyfile", default=""): str,
Optional("ldap_tls_reqcert", default=""): str,
Optional("ldap_tls_cipher_suite", default=""): str,
Optional("bind_manage_ipmi", default=False): bool,
# TODO: Remove following line
"manage_dhcp": bool,
"manage_dhcp_v4": bool,
"manage_dhcp_v6": bool,
"manage_dns": bool,
"manage_forward_zones": [str],
"manage_reverse_zones": [str],
        Optional("manage_genders", default=False): bool,
"manage_rsync": bool,
"manage_tftpd": bool,
"mgmt_classes": [str],
# TODO: Validate Subdict
"mgmt_parameters": dict,
"next_server_v4": str,
"next_server_v6": str,
        Optional("nsupdate_enabled", default=False): bool,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional("nsupdate_tsig_key", default=[]): [str],
"power_management_default_type": str,
Optional("proxies", default=[]): [str],
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": bool,
Optional("puppet_parameterized_classes", default=True): bool,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"puppetca_path": str,
"pxe_just_once": bool,
"nopxe_with_triggers": bool,
"redhat_management_permissive": bool,
"redhat_management_server": str,
"redhat_management_key": str,
"register_new_installs": bool,
"remove_old_puppet_certs_automatically": bool,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": bool,
"restart_dns": bool,
"run_install_triggers": bool,
"scm_track_enabled": bool,
"scm_track_mode": str,
"scm_track_author": str,
"scm_push_script": str,
"serializer_pretty_json": bool,
"server": str,
"sign_puppet_certs_automatically": bool,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"tftpboot_location": str,
"virt_auto_boot": bool,
"webdir": str,
"webdir_whitelist": [str],
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": bool,
"yumdownloader_flags": str,
Optional("windows_enabled", default=False): bool,
Optional("windows_template_dir", default="/etc/cobbler/windows"): str,
Optional("samba_distro_share", default="DISTRO"): str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference V3.3.1 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    If the data in ``settings`` is valid, the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to version V3.3.1 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
# rename keys and update their value
# add missing keys
# name - value pairs
settings["default_virt_file_size"] = float(
settings.get("default_virt_file_size", 5.0)
)
if not validate(settings):
raise SchemaError("V3.3.1: Schema error while validating")
return normalize(settings)
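Each migration module in this dump repeats the same validate/normalize/migrate contract. Below is a minimal stdlib-only sketch of that contract; the `SCHEMA` mapping and key names are illustrative stand-ins for the real `schema.Schema` object, not Cobbler's actual schema:

```python
from typing import Any, Dict

# Illustrative stand-in for the `schema.Schema` object: key -> expected type.
SCHEMA: Dict[str, type] = {"server": str, "default_virt_file_size": float}


def validate(settings: Dict[str, Any]) -> bool:
    """True when every schema key is present with the expected type."""
    return all(
        key in settings and isinstance(settings[key], expected)
        for key, expected in SCHEMA.items()
    )


def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
    """Coerce the legacy int size to float, then re-validate -- mirroring
    the ``default_virt_file_size`` coercion in the migration above."""
    settings["default_virt_file_size"] = float(
        settings.get("default_virt_file_size", 5.0)
    )
    if not validate(settings):
        raise ValueError("schema error while validating")
    return settings


print(migrate({"server": "cobbler.example.org", "default_virt_file_size": 5}))
# -> {'server': 'cobbler.example.org', 'default_virt_file_size': 5.0}
```

The same shape scales to the real modules: each version's `migrate` first validates against the previous version's schema, mutates the dict, then re-validates against its own schema.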
[row metadata] size: 10,302 | language: Python | extension: .py | total_lines: 265 | avg_line_length: 28.275472 | max_line_length: 99 | alphanum_fraction: 0.544357 | repo: cobbler/cobbler | stars: 2,597 | forks: 653 | open_issues: 318 | license: GPL-2.0 | extracted: 9/5/2024, 5:11:26 PM (Europe/Amsterdam)

[row 12,130] file_name: V3_3_1.py | file_path: cobbler_cobbler/cobbler/settings/migrations/V3_3_1.py
"""
Migration from V3.3.0 to V3.3.1
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_3_0, helper
schema = Schema(
{
"auto_migrate_settings": bool,
"allow_duplicate_hostnames": bool,
"allow_duplicate_ips": bool,
"allow_duplicate_macs": bool,
"allow_dynamic_settings": bool,
"always_write_dhcp_entries": bool,
"anamon_enabled": bool,
"auth_token_expiration": int,
"authn_pam_service": str,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
"bind_zonefile_path": str,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional(
"bootloaders_formats",
default={
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {
"binary_name": "grubaa64.efi",
"extra_modules": ["efinet"],
},
"i386": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
"binary_name": "grubx86.efi",
"extra_modules": ["chain", "efinet"],
},
},
): dict,
Optional(
"bootloaders_modules",
default=[
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
],
): list,
Optional("bootloaders_shim_folder", default="/usr/share/efi/*/"): str,
Optional("bootloaders_shim_file", default=r"shim\.efi"): str,
Optional("bootloaders_ipxe_folder", default="/usr/share/ipxe/"): str,
Optional("syslinux_dir", default="/usr/share/syslinux"): str,
Optional("syslinux_memdisk_folder", default="/usr/share/syslinux"): str,
Optional("syslinux_pxelinux_folder", default="/usr/share/syslinux"): str,
Optional("grub2_mod_dir", default="/usr/share/grub"): str,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"build_reporting_enabled": bool,
"build_reporting_email": [str],
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"cheetah_import_whitelist": [str],
"client_use_https": bool,
"client_use_localhost": bool,
Optional("cobbler_master", default=""): str,
Optional("convert_server_to_ip", default=False): bool,
"createrepo_flags": str,
"autoinstall": str,
"default_name_servers": [str],
"default_name_servers_search": [str],
"default_ownership": [str],
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_ipxe": bool,
"enable_menu": bool,
Optional("extra_settings_list", default=[]): [str],
"http_port": int,
"include": [str],
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("jinja2_includedir", default="/var/lib/cobbler/jinja2"): str,
"kernel_options": dict,
Optional("ldap_anonymous_bind", default=True): bool,
Optional("ldap_base_dn", default="DC=devel,DC=redhat,DC=com"): str,
Optional("ldap_port", default=389): int,
Optional("ldap_search_bind_dn", default=""): str,
Optional("ldap_search_passwd", default=""): str,
Optional("ldap_search_prefix", default="uid="): str,
Optional("ldap_server", default="grimlock.devel.redhat.com"): str,
Optional("ldap_tls", default=True): bool,
Optional("ldap_tls_cacertdir", default=""): str,
Optional("ldap_tls_cacertfile", default=""): str,
Optional("ldap_tls_certfile", default=""): str,
Optional("ldap_tls_keyfile", default=""): str,
Optional("ldap_tls_reqcert", default=""): str,
Optional("ldap_tls_cipher_suite", default=""): str,
Optional("bind_manage_ipmi", default=False): bool,
# TODO: Remove following line
"manage_dhcp": bool,
"manage_dhcp_v4": bool,
"manage_dhcp_v6": bool,
"manage_dns": bool,
"manage_forward_zones": [str],
"manage_reverse_zones": [str],
        Optional("manage_genders", default=False): bool,
"manage_rsync": bool,
"manage_tftpd": bool,
"mgmt_classes": [str],
# TODO: Validate Subdict
"mgmt_parameters": dict,
"next_server_v4": str,
"next_server_v6": str,
        Optional("nsupdate_enabled", default=False): bool,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional("nsupdate_tsig_key", default=[]): [str],
"power_management_default_type": str,
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": bool,
Optional("puppet_parameterized_classes", default=True): bool,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"puppetca_path": str,
"pxe_just_once": bool,
"nopxe_with_triggers": bool,
"redhat_management_permissive": bool,
"redhat_management_server": str,
"redhat_management_key": str,
"register_new_installs": bool,
"remove_old_puppet_certs_automatically": bool,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": bool,
"restart_dns": bool,
"run_install_triggers": bool,
"scm_track_enabled": bool,
"scm_track_mode": str,
"scm_track_author": str,
"scm_push_script": str,
"serializer_pretty_json": bool,
"server": str,
"sign_puppet_certs_automatically": bool,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"tftpboot_location": str,
"virt_auto_boot": bool,
"webdir": str,
"webdir_whitelist": [str],
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": bool,
"yumdownloader_flags": str,
Optional("windows_enabled", default=False): bool,
Optional("windows_template_dir", default="/etc/cobbler/windows"): str,
Optional("samba_distro_share", default="DISTRO"): str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference V3.3.1 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    If the data in ``settings`` is valid, the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to version V3.3.1 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_3_0.validate(settings):
raise SchemaError("V3.3.0: Schema error while validating")
# rename keys and update their value
# add missing keys
# name - value pairs
missing_keys = {
"auto_migrate_settings": True,
"ldap_tls_cacertdir": "",
"ldap_tls_reqcert": "hard",
"ldap_tls_cipher_suite": "",
"bootloaders_shim_folder": "/usr/share/efi/*/",
"bootloaders_shim_file": r"shim\.efi",
"bootloaders_ipxe_folder": "/usr/share/ipxe/",
"syslinux_memdisk_folder": "/usr/share/syslinux",
"syslinux_pxelinux_folder": "/usr/share/syslinux",
}
for key, value in missing_keys.items():
new_setting = helper.Setting(key, value)
helper.key_add(new_setting, settings)
# delete removed keys
return normalize(settings)
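The `missing_keys` loop above is the standard idiom these migrations use for introducing new settings. The helper module itself is not shown in this chunk, so the `key_add` below is a hypothetical minimal version; that the real `helper.key_add` preserves user-customised values in exactly this way is an assumption:

```python
from typing import Any, Dict


def key_add(key: str, value: Any, settings: Dict[str, Any]) -> None:
    # Hypothetical stand-in for cobbler.settings.migrations.helper.key_add:
    # insert the new default only when the key is missing, so values the
    # user already set in the old settings file survive the migration.
    settings.setdefault(key, value)


missing_keys = {
    "ldap_tls_reqcert": "hard",
    "syslinux_memdisk_folder": "/usr/share/syslinux",
}
settings = {"ldap_tls_reqcert": "never"}  # user had customised this key
for key, value in missing_keys.items():
    key_add(key, value, settings)

print(settings["ldap_tls_reqcert"])         # -> never (existing value kept)
print(settings["syslinux_memdisk_folder"])  # -> /usr/share/syslinux
```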
[row metadata] size: 10,830 | language: Python | extension: .py | total_lines: 277 | avg_line_length: 28.617329 | max_line_length: 99 | alphanum_fraction: 0.549502 | repo: cobbler/cobbler | stars: 2,597 | forks: 653 | open_issues: 318 | license: GPL-2.0 | extracted: 9/5/2024, 5:11:26 PM (Europe/Amsterdam)

[row 12,131] file_name: V3_3_0.py | file_path: cobbler_cobbler/cobbler/settings/migrations/V3_3_0.py
"""
Migration from V3.2.1 to V3.3.0
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import glob
import ipaddress
import json
import os
import socket
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_2_1, helper
schema = Schema(
{
"auto_migrate_settings": bool,
"allow_duplicate_hostnames": bool,
"allow_duplicate_ips": bool,
"allow_duplicate_macs": bool,
"allow_dynamic_settings": bool,
"always_write_dhcp_entries": bool,
"anamon_enabled": bool,
"auth_token_expiration": int,
"authn_pam_service": str,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
"bind_zonefile_path": str,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional(
"bootloaders_formats",
default={
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {
"binary_name": "grubaa64.efi",
"extra_modules": ["efinet"],
},
"i386": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
"binary_name": "grubx86.efi",
"extra_modules": ["chain", "efinet"],
},
},
): dict,
Optional(
"bootloaders_modules",
default=[
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
],
): list,
Optional("syslinux_dir", default="/usr/share/syslinux"): str,
Optional("grub2_mod_dir", default="/usr/share/grub"): str,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"build_reporting_enabled": bool,
"build_reporting_email": [str],
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"cheetah_import_whitelist": [str],
"client_use_https": bool,
"client_use_localhost": bool,
Optional("cobbler_master", default=""): str,
Optional("convert_server_to_ip", default=False): bool,
"createrepo_flags": str,
"autoinstall": str,
"default_name_servers": [str],
"default_name_servers_search": [str],
"default_ownership": [str],
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_ipxe": bool,
"enable_menu": bool,
Optional("extra_settings_list", default=[]): [str],
"http_port": int,
"include": [str],
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("jinja2_includedir", default="/var/lib/cobbler/jinja2"): str,
"kernel_options": dict,
"ldap_anonymous_bind": bool,
"ldap_base_dn": str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls": bool,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
Optional("bind_manage_ipmi", default=False): bool,
# TODO: Remove following line
"manage_dhcp": bool,
"manage_dhcp_v4": bool,
"manage_dhcp_v6": bool,
"manage_dns": bool,
"manage_forward_zones": [str],
"manage_reverse_zones": [str],
        Optional("manage_genders", default=False): bool,
"manage_rsync": bool,
"manage_tftpd": bool,
"mgmt_classes": [str],
# TODO: Validate Subdict
"mgmt_parameters": dict,
"next_server_v4": str,
"next_server_v6": str,
        Optional("nsupdate_enabled", default=False): bool,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional("nsupdate_tsig_key", default=[]): [str],
"power_management_default_type": str,
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": bool,
Optional("puppet_parameterized_classes", default=True): bool,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"puppetca_path": str,
"pxe_just_once": bool,
"nopxe_with_triggers": bool,
"redhat_management_permissive": bool,
"redhat_management_server": str,
"redhat_management_key": str,
"register_new_installs": bool,
"remove_old_puppet_certs_automatically": bool,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": bool,
"restart_dns": bool,
"run_install_triggers": bool,
"scm_track_enabled": bool,
"scm_track_mode": str,
"scm_track_author": str,
"scm_push_script": str,
"serializer_pretty_json": bool,
"server": str,
"sign_puppet_certs_automatically": bool,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"tftpboot_location": str,
"virt_auto_boot": bool,
"webdir": str,
"webdir_whitelist": [str],
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": bool,
"yumdownloader_flags": str,
Optional("windows_enabled", default=False): bool,
Optional("windows_template_dir", default="/etc/cobbler/windows"): str,
Optional("samba_distro_share", default="DISTRO"): str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference V3.3.0 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    If the data in ``settings`` is valid, the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to version V3.3.0 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_2_1.validate(settings):
raise SchemaError("V3.2.1: Schema error while validating")
# migrate gpxe -> ipxe
if "enable_gpxe" in settings:
gpxe = helper.key_get("enable_gpxe", settings)
helper.key_rename(gpxe, "enable_ipxe", settings)
# rename keys and update their value
old_setting = helper.Setting(
"default_autoinstall", "/var/lib/cobbler/autoinstall_templates/default.ks"
)
new_setting = helper.Setting("autoinstall", "default.ks")
helper.key_rename(old_setting, "autoinstall", settings)
helper.key_set_value(new_setting, settings)
old_setting = helper.Setting("next_server", "127.0.0.1")
new_setting = helper.Setting("next_server_v4", "127.0.0.1")
helper.key_rename(old_setting, "next_server_v4", settings)
helper.key_set_value(new_setting, settings)
power_type = helper.key_get("power_management_default_type", settings)
if power_type.value == "ipmitool":
new_setting = helper.Setting("power_management_default_type", "ipmilanplus")
helper.key_set_value(new_setting, settings)
# add missing keys
# name - value pairs
missing_keys = {
"auto_migrate_settings": True,
"bind_zonefile_path": "/var/lib/named",
"bootloaders_formats": {
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {"binary_name": "grubaa64.efi", "extra_modules": ["efinet"]},
"i386": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
"binary_name": "grubx86.efi",
"extra_modules": ["chain", "efinet"],
},
},
"bootloaders_modules": [
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
],
"grub2_mod_dir": "/usr/share/grub2",
"manage_dhcp_v4": False,
"manage_dhcp_v6": False,
"next_server_v6": "::1",
"syslinux_dir": "/usr/share/syslinux",
}
for key, value in missing_keys.items():
new_setting = helper.Setting(key, value)
helper.key_add(new_setting, settings)
# delete removed keys
helper.key_delete("cache_enabled", settings)
# migrate stored cobbler collections
migrate_cobbler_collections("/var/lib/cobbler/collections/")
return normalize(settings)
def migrate_cobbler_collections(collections_dir: str) -> None:
"""
Manipulate the main Cobbler stored collections and migrate deprecated settings
to work with newer Cobbler versions.
:param collections_dir: The directory of Cobbler where the collections files are.
"""
helper.backup_dir(collections_dir)
for collection_file in glob.glob(
os.path.join(collections_dir, "**/*.json"), recursive=True
):
data = None
with open(collection_file, encoding="utf-8") as _f:
data = json.loads(_f.read())
# null values to empty strings
for key in data:
if data[key] is None:
data[key] = ""
# boot_loader -> boot_loaders (preserving possible existing value)
if "boot_loader" in data and "boot_loaders" not in data:
data["boot_loaders"] = data.pop("boot_loader")
elif "boot_loader" in data:
data.pop("boot_loader")
# next_server -> next_server_v4, next_server_v6
if "next_server" in data:
addr = data["next_server"]
if addr == "<<inherit>>":
data["next_server_v4"] = addr
data["next_server_v6"] = addr
data.pop("next_server")
else:
try:
_ip = ipaddress.ip_address(addr)
if isinstance(_ip, ipaddress.IPv4Address):
data["next_server_v4"] = data.pop("next_server")
elif isinstance(_ip, ipaddress.IPv6Address): # type: ignore
data["next_server_v6"] = data.pop("next_server")
except ValueError:
# next_server is a hostname so we need to resolve hostname
try:
data["next_server_v4"] = socket.getaddrinfo(
addr,
None,
socket.AF_INET,
                        )[0][4][0]  # first resolved address
except OSError:
pass
try:
data["next_server_v6"] = socket.getaddrinfo(
addr,
None,
socket.AF_INET6,
                        )[0][4][0]  # first resolved address
except OSError:
pass
if "next_server_v4" not in data and "next_server_v6" not in data:
print(
"ERROR: Neither IPv4 nor IPv6 addresses can be resolved for "
f"'next server': {data['next_server']}. Please check your DNS configuration."
)
else:
data.pop("next_server")
# enable_gpxe -> enable_ipxe
if "enable_gpxe" in data:
data["enable_ipxe"] = data.pop("enable_gpxe")
# ipmitool power_type -> ipmilan power_type
if "power_type" in data and data["power_type"] == "ipmitool":
data["power_type"] = "ipmilanplus"
# serial_device (str) -> serial_device (int)
if "serial_device" in data and data["serial_device"] == "":
data["serial_device"] = -1
elif "serial_device" in data and isinstance(data["serial_device"], str):
try:
data["serial_device"] = int(data["serial_device"])
except ValueError as exc:
print(f"ERROR casting 'serial_device' attribute to int: {exc}")
# serial_baud_rate (str) -> serial_baud_rate (int)
if "serial_baud_rate" in data and data["serial_baud_rate"] == "":
data["serial_baud_rate"] = -1
elif "serial_baud_rate" in data and isinstance(data["serial_baud_rate"], str):
try:
data["serial_baud_rate"] = int(data["serial_baud_rate"])
except ValueError as exc:
print(f"ERROR casting 'serial_baud_rate' attribute to int: {exc}")
with open(collection_file, "w", encoding="utf-8") as _f:
_f.write(json.dumps(data))
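The `next_server` handling in `migrate_cobbler_collections` above splits one legacy field into per-address-family fields. A condensed sketch of the literal-address branch (the `<<inherit>>` case and the hostname/DNS fallback are omitted for brevity):

```python
import ipaddress
from typing import Dict


def split_next_server(data: Dict[str, str]) -> Dict[str, str]:
    """Move a literal next_server address into next_server_v4 or
    next_server_v6, depending on the address family."""
    addr = data.pop("next_server")
    ip = ipaddress.ip_address(addr)  # raises ValueError for hostnames
    if isinstance(ip, ipaddress.IPv4Address):
        data["next_server_v4"] = addr
    else:
        data["next_server_v6"] = addr
    return data


print(split_next_server({"next_server": "192.0.2.1"}))    # -> {'next_server_v4': '192.0.2.1'}
print(split_next_server({"next_server": "2001:db8::1"}))  # -> {'next_server_v6': '2001:db8::1'}
```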
[row metadata] size: 17,173 | language: Python | extension: .py | total_lines: 450 | avg_line_length: 26.56 | max_line_length: 105 | alphanum_fraction: 0.521333 | repo: cobbler/cobbler | stars: 2,597 | forks: 653 | open_issues: 318 | license: GPL-2.0 | extracted: 9/5/2024, 5:11:26 PM (Europe/Amsterdam)

[row 12,132] file_name: V3_4_0.py | file_path: cobbler_cobbler/cobbler/settings/migrations/V3_4_0.py
"""
Migration from V3.3.3 to V3.4.0
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2022 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import configparser
import glob
import json
import os
import pathlib
from configparser import ConfigParser
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_3_5, helper
schema = Schema(
{
Optional("auto_migrate_settings"): bool,
Optional("allow_duplicate_hostnames"): bool,
Optional("allow_duplicate_ips"): bool,
Optional("allow_duplicate_macs"): bool,
Optional("allow_dynamic_settings"): bool,
Optional("always_write_dhcp_entries"): bool,
Optional("anamon_enabled"): bool,
Optional("auth_token_expiration"): int,
Optional("authn_pam_service"): str,
Optional("autoinstall_snippets_dir"): str,
Optional("autoinstall_templates_dir"): str,
Optional("bind_chroot_path"): str,
Optional("bind_zonefile_path"): str,
Optional("bind_master"): str,
Optional("boot_loader_conf_template_dir"): str,
Optional("bootloaders_dir"): str,
Optional("bootloaders_formats"): dict,
Optional("bootloaders_modules"): list,
Optional("bootloaders_shim_folder"): str,
Optional("bootloaders_shim_file"): str,
Optional("secure_boot_grub_folder"): str,
Optional("secure_boot_grub_file"): str,
Optional("bootloaders_ipxe_folder"): str,
Optional("syslinux_dir"): str,
Optional("syslinux_memdisk_folder"): str,
Optional("syslinux_pxelinux_folder"): str,
Optional("grub2_mod_dir"): str,
Optional("grubconfig_dir"): str,
Optional("build_reporting_enabled"): bool,
Optional("build_reporting_email"): [str],
Optional("build_reporting_ignorelist"): [str],
Optional("build_reporting_sender"): str,
Optional("build_reporting_smtp_server"): str,
Optional("build_reporting_subject"): str,
Optional("buildisodir"): str,
Optional("cheetah_import_whitelist"): [str],
Optional("client_use_https"): bool,
Optional("client_use_localhost"): bool,
Optional("cobbler_master"): str,
Optional("convert_server_to_ip"): bool,
Optional("createrepo_flags"): str,
Optional("autoinstall"): str,
Optional("default_name_servers"): [str],
Optional("default_name_servers_search"): [str],
Optional("default_ownership"): [str],
Optional("default_password_crypted"): str,
Optional("default_template_type"): str,
Optional("default_virt_bridge"): str,
Optional("default_virt_disk_driver"): str,
Optional("default_virt_file_size"): float,
Optional("default_virt_ram"): int,
Optional("default_virt_type"): str,
Optional("dnsmasq_ethers_file"): str,
Optional("dnsmasq_hosts_file"): str,
Optional("enable_ipxe"): bool,
Optional("enable_menu"): bool,
Optional("extra_settings_list"): [str],
Optional("http_port"): int,
Optional("iso_template_dir"): str,
Optional("jinja2_includedir"): str,
Optional("kernel_options"): dict,
Optional("ldap_anonymous_bind"): bool,
Optional("ldap_base_dn"): str,
Optional("ldap_port"): int,
Optional("ldap_search_bind_dn"): str,
Optional("ldap_search_passwd"): str,
Optional("ldap_search_prefix"): str,
Optional("ldap_server"): str,
Optional("ldap_tls"): bool,
Optional("ldap_tls_cacertdir"): str,
Optional("ldap_tls_cacertfile"): str,
Optional("ldap_tls_certfile"): str,
Optional("ldap_tls_keyfile"): str,
Optional("ldap_tls_reqcert"): str,
Optional("ldap_tls_cipher_suite"): str,
Optional("bind_manage_ipmi"): bool,
# TODO: Remove following line
Optional("manage_dhcp"): bool,
Optional("manage_dhcp_v4"): bool,
Optional("manage_dhcp_v6"): bool,
Optional("manage_dns"): bool,
Optional("manage_forward_zones"): [str],
Optional("manage_reverse_zones"): [str],
Optional("manage_genders"): bool,
Optional("manage_rsync"): bool,
Optional("manage_tftpd"): bool,
Optional("next_server_v4"): str,
Optional("next_server_v6"): str,
Optional("nsupdate_enabled"): bool,
Optional("nsupdate_log"): str,
Optional("nsupdate_tsig_algorithm"): str,
Optional("nsupdate_tsig_key"): [str],
Optional("power_management_default_type"): str,
Optional("proxies"): [str],
Optional("proxy_url_ext"): str,
Optional("proxy_url_int"): str,
Optional("puppet_auto_setup"): bool,
Optional("puppet_parameterized_classes"): bool,
Optional("puppet_server"): str,
Optional("puppet_version"): int,
Optional("puppetca_path"): str,
Optional("pxe_just_once"): bool,
Optional("nopxe_with_triggers"): bool,
Optional("redhat_management_permissive"): bool,
Optional("redhat_management_server"): str,
Optional("redhat_management_key"): str,
Optional("uyuni_authentication_endpoint"): str,
Optional("register_new_installs"): bool,
Optional("remove_old_puppet_certs_automatically"): bool,
Optional("replicate_repo_rsync_options"): str,
Optional("replicate_rsync_options"): str,
Optional("reposync_flags"): str,
Optional("reposync_rsync_flags"): str,
Optional("restart_dhcp"): bool,
Optional("restart_dns"): bool,
Optional("run_install_triggers"): bool,
Optional("scm_track_enabled"): bool,
Optional("scm_track_mode"): str,
Optional("scm_track_author"): str,
Optional("scm_push_script"): str,
Optional("serializer_pretty_json"): bool,
Optional("server"): str,
Optional("sign_puppet_certs_automatically"): bool,
Optional("signature_path"): str,
Optional("signature_url"): str,
Optional("tftpboot_location"): str,
Optional("virt_auto_boot"): bool,
Optional("webdir"): str,
Optional("webdir_whitelist"): [str],
Optional("xmlrpc_port"): int,
Optional("yum_distro_priority"): int,
Optional("yum_post_install_mirror"): bool,
Optional("yumdownloader_flags"): str,
Optional("windows_enabled"): bool,
Optional("windows_template_dir"): str,
Optional("samba_distro_share"): str,
Optional("modules"): {
Optional("authentication"): {
Optional("module"): str,
Optional("hash_algorithm"): str,
},
Optional("authorization"): {Optional("module"): str},
Optional("dns"): {Optional("module"): str},
Optional("dhcp"): {Optional("module"): str},
Optional("tftpd"): {Optional("module"): str},
Optional("serializers"): {Optional("module"): str},
},
Optional("mongodb"): {
Optional("host"): str,
Optional("port"): int,
},
Optional("cache_enabled"): bool,
Optional("autoinstall_scheme"): str,
Optional("lazy_start"): bool,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference V3.4.0 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    If the data in ``settings`` is valid, the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to version V3.4.0 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_3_5.validate(settings):
raise SchemaError("V3.3.5: Schema error while validating")
# rename keys and update their value if needed
include = settings.pop("include")
    # mgmt_classes and mgmt_parameters were removed in 3.4.0; drop them
    # without clobbering the "include" list that is iterated below
    settings.pop("mgmt_classes", None)
    settings.pop("mgmt_parameters", None)
# Do mongodb.conf migration
mongodb_config = "/etc/cobbler/mongodb.conf"
modules_config_parser = ConfigParser()
try:
modules_config_parser.read(mongodb_config)
except configparser.Error as cp_error:
raise configparser.Error(
"Could not read Cobbler MongoDB config file!"
) from cp_error
settings["mongodb"] = {
"host": modules_config_parser.get("connection", "host", fallback="localhost"),
"port": modules_config_parser.getint("connection", "port", fallback=27017),
}
mongodb_config_path = pathlib.Path(mongodb_config)
if mongodb_config_path.exists():
mongodb_config_path.unlink()
    # Do modules.conf migration
modules_config = "/etc/cobbler/modules.conf"
modules_config_parser = ConfigParser()
try:
        modules_config_parser.read(modules_config)
except configparser.Error as cp_error:
raise configparser.Error(
"Could not read Cobbler modules.conf config file!"
) from cp_error
settings["modules"] = {
"authentication": {
"module": modules_config_parser.get(
"authentication", "module", fallback="authentication.configfile"
),
"hash_algorithm": modules_config_parser.get(
"authentication", "hash_algorithm", fallback="sha3_512"
),
},
"authorization": {
"module": modules_config_parser.get(
"authorization", "module", fallback="authorization.allowall"
)
},
"dns": {
"module": modules_config_parser.get(
"dns", "module", fallback="managers.bind"
)
},
"dhcp": {
"module": modules_config_parser.get(
"dhcp", "module", fallback="managers.isc"
)
},
"tftpd": {
"module": modules_config_parser.get(
"tftpd", "module", fallback="managers.in_tftpd"
)
},
"serializers": {
"module": modules_config_parser.get(
"serializers", "module", fallback="serializers.file"
)
},
}
modules_config_path = pathlib.Path(modules_config)
if modules_config_path.exists():
modules_config_path.unlink()
# Drop defaults
from cobbler.settings import Settings
helper.key_drop_if_default(settings, Settings().to_dict())
# Write settings to disk
from cobbler.settings import update_settings_file
update_settings_file(settings)
for include_path in include:
include_directory = pathlib.Path(include_path)
if include_directory.is_dir() and include_directory.exists():
include_directory.rmdir()
# migrate stored cobbler collections
migrate_cobbler_collections("/var/lib/cobbler/collections/")
return normalize(settings)
def migrate_cobbler_collections(collections_dir: str) -> None:
"""
Manipulate the main Cobbler stored collections and migrate deprecated settings
to work with newer Cobbler versions.
:param collections_dir: The directory of Cobbler where the collections files are.
"""
helper.backup_dir(collections_dir)
for collection_file in glob.glob(
os.path.join(collections_dir, "**/*.json"), recursive=True
):
data = None
with open(collection_file, encoding="utf-8") as _f:
data = json.loads(_f.read())
        # migrate interface.interface_type from empty string to "NA"
if "interfaces" in data:
for iface in data["interfaces"]:
if data["interfaces"][iface]["interface_type"] == "":
data["interfaces"][iface]["interface_type"] = "NA"
# Remove fetchable_files from the items
if "fetchable_files" in data:
data.pop("fetchable_files", None)
# Migrate boot_files to template_files
if "boot_files" in data and "template_files" in data:
data["template_files"] = {**data["template_files"], **data["boot_files"]}
with open(collection_file, "w", encoding="utf-8") as _f:
_f.write(json.dumps(data))
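The per-item transformations applied above can be isolated as a pure function, which makes the behavior easy to verify. This is a self-contained sketch, not Cobbler's API; ``migrate_item`` and the sample data are invented for illustration.

```python
from typing import Any, Dict


def migrate_item(data: Dict[str, Any]) -> Dict[str, Any]:
    # Empty interface_type values become the explicit "NA" marker.
    for iface in data.get("interfaces", {}).values():
        if iface.get("interface_type") == "":
            iface["interface_type"] = "NA"
    # fetchable_files was dropped and is removed from every item.
    data.pop("fetchable_files", None)
    # boot_files entries are folded into template_files (boot_files wins on key conflicts).
    if "boot_files" in data and "template_files" in data:
        data["template_files"] = {**data["template_files"], **data["boot_files"]}
    return data


item = {
    "interfaces": {"eth0": {"interface_type": ""}},
    "fetchable_files": {"a": "b"},
    "template_files": {"t": "1"},
    "boot_files": {"b": "2"},
}
migrated = migrate_item(item)
```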
# === File: cobbler_cobbler/cobbler/settings/migrations/V3_3_4.py ===
"""
Migration from V3.3.3 to V3.3.4
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2024 Enno Gotthold <egotthold@suse.com>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import SchemaError # type: ignore
from cobbler.settings.migrations import V3_3_3
# schema identical to V3_3_3
schema = V3_3_3.schema
def validate(settings: Dict[str, Any]) -> bool:
"""
    Checks that a given settings dict is valid according to the reference V3.3.4 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to version V3.3.4 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_3_3.validate(settings):
raise SchemaError("V3.3.3: Schema error while validating")
# rename keys and update their value
# add missing keys
# name - value pairs
return normalize(settings)
# === File: cobbler_cobbler/cobbler/settings/migrations/V3_3_5.py ===
"""
Migration from V3.3.4 to V3.3.5
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2024 Enno Gotthold <egotthold@suse.com>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_3_4
schema = Schema(
{
"auto_migrate_settings": bool,
"allow_duplicate_hostnames": bool,
"allow_duplicate_ips": bool,
"allow_duplicate_macs": bool,
"allow_dynamic_settings": bool,
"always_write_dhcp_entries": bool,
"anamon_enabled": bool,
"auth_token_expiration": int,
"authn_pam_service": str,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
"bind_zonefile_path": str,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional(
"bootloaders_formats",
default={
"aarch64": {"binary_name": "grubaa64.efi"},
"arm": {"binary_name": "bootarm.efi"},
"arm64-efi": {
"binary_name": "grubaa64.efi",
"extra_modules": ["efinet"],
},
"i386": {"binary_name": "bootia32.efi"},
"i386-pc-pxe": {
"binary_name": "grub.0",
"mod_dir": "i386-pc",
"extra_modules": ["chain", "pxe", "biosdisk"],
},
"i686": {"binary_name": "bootia32.efi"},
"IA64": {"binary_name": "bootia64.efi"},
"powerpc-ieee1275": {
"binary_name": "grub.ppc64le",
"extra_modules": ["net", "ofnet"],
},
"x86_64-efi": {
"binary_name": "grubx86.efi",
"extra_modules": ["chain", "efinet"],
},
},
): dict,
Optional(
"bootloaders_modules",
default=[
"btrfs",
"ext2",
"xfs",
"jfs",
"reiserfs",
"all_video",
"boot",
"cat",
"configfile",
"echo",
"fat",
"font",
"gfxmenu",
"gfxterm",
"gzio",
"halt",
"iso9660",
"jpeg",
"linux",
"loadenv",
"minicmd",
"normal",
"part_apple",
"part_gpt",
"part_msdos",
"password_pbkdf2",
"png",
"reboot",
"search",
"search_fs_file",
"search_fs_uuid",
"search_label",
"sleep",
"test",
"true",
"video",
"mdraid09",
"mdraid1x",
"lvm",
"serial",
"regexp",
"tr",
"tftp",
"http",
"luks",
"gcry_rijndael",
"gcry_sha1",
"gcry_sha256",
],
): list,
Optional("bootloaders_shim_folder", default="/usr/share/efi/*/"): str,
Optional("bootloaders_shim_file", default=r"shim\.efi"): str,
Optional("bootloaders_ipxe_folder", default="/usr/share/ipxe/"): str,
Optional("syslinux_dir", default="/usr/share/syslinux"): str,
Optional("syslinux_memdisk_folder", default="/usr/share/syslinux"): str,
Optional("syslinux_pxelinux_folder", default="/usr/share/syslinux"): str,
Optional("grub2_mod_dir", default="/usr/share/grub"): str,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"build_reporting_enabled": bool,
"build_reporting_email": [str],
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"cheetah_import_whitelist": [str],
"client_use_https": bool,
"client_use_localhost": bool,
Optional("cobbler_master", default=""): str,
Optional("convert_server_to_ip", default=False): bool,
"createrepo_flags": str,
"autoinstall": str,
"default_name_servers": [str],
"default_name_servers_search": [str],
"default_ownership": [str],
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": float,
"default_virt_ram": int,
"default_virt_type": str,
"enable_ipxe": bool,
"enable_menu": bool,
Optional("extra_settings_list", default=[]): [str],
"http_port": int,
"include": [str],
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("jinja2_includedir", default="/var/lib/cobbler/jinja2"): str,
"kernel_options": dict,
Optional("ldap_anonymous_bind", default=True): bool,
Optional("ldap_base_dn", default="DC=devel,DC=redhat,DC=com"): str,
Optional("ldap_port", default=389): int,
Optional("ldap_search_bind_dn", default=""): str,
Optional("ldap_search_passwd", default=""): str,
Optional("ldap_search_prefix", default="uid="): str,
Optional("ldap_server", default="grimlock.devel.redhat.com"): str,
Optional("ldap_tls", default=True): bool,
Optional("ldap_tls_cacertdir", default=""): str,
Optional("ldap_tls_cacertfile", default=""): str,
Optional("ldap_tls_certfile", default=""): str,
Optional("ldap_tls_keyfile", default=""): str,
Optional("ldap_tls_reqcert", default=""): str,
Optional("ldap_tls_cipher_suite", default=""): str,
Optional("bind_manage_ipmi", default=False): bool,
# TODO: Remove following line
"manage_dhcp": bool,
"manage_dhcp_v4": bool,
"manage_dhcp_v6": bool,
"manage_dns": bool,
"manage_forward_zones": [str],
"manage_reverse_zones": [str],
Optional("manage_genders", False): bool,
"manage_rsync": bool,
"manage_tftpd": bool,
"mgmt_classes": [str],
# TODO: Validate Subdict
"mgmt_parameters": dict,
"next_server_v4": str,
"next_server_v6": str,
Optional("nsupdate_enabled", False): bool,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional("nsupdate_tsig_key", default=[]): [str],
"power_management_default_type": str,
Optional("proxies", default=[]): [str],
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": bool,
Optional("puppet_parameterized_classes", default=True): bool,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"puppetca_path": str,
"pxe_just_once": bool,
"nopxe_with_triggers": bool,
"redhat_management_permissive": bool,
"redhat_management_server": str,
"redhat_management_key": str,
"register_new_installs": bool,
"remove_old_puppet_certs_automatically": bool,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": bool,
"restart_dns": bool,
"run_install_triggers": bool,
"scm_track_enabled": bool,
"scm_track_mode": str,
"scm_track_author": str,
"scm_push_script": str,
"serializer_pretty_json": bool,
"server": str,
"sign_puppet_certs_automatically": bool,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"tftpboot_location": str,
"virt_auto_boot": bool,
"webdir": str,
"webdir_whitelist": [str],
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": bool,
"yumdownloader_flags": str,
Optional("windows_enabled", default=False): bool,
Optional("windows_template_dir", default="/etc/cobbler/windows"): str,
Optional("samba_distro_share", default="DISTRO"): str,
Optional("cache_enabled"): bool,
Optional("lazy_start", default=False): bool,
},
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
    Checks that a given settings dict is valid according to the reference V3.3.5 schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
    Migration of the settings ``settings`` to version V3.3.5 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_3_4.validate(settings):
        raise SchemaError("V3.3.4: Schema error while validating")
# rename keys and update their value
# add missing keys
# name - value pairs
return normalize(settings)
# === File: cobbler_cobbler/cobbler/settings/migrations/V3_2_1.py ===
"""
Migration from V3.2.0 to V3.2.1
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import os
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_2_0, helper
from cobbler.utils import input_converters
schema = Schema(
{
Optional("auto_migrate_settings", default=True): bool,
"allow_duplicate_hostnames": bool,
"allow_duplicate_ips": bool,
"allow_duplicate_macs": bool,
"allow_dynamic_settings": bool,
"always_write_dhcp_entries": bool,
"anamon_enabled": bool,
"auth_token_expiration": int,
"authn_pam_service": str,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"build_reporting_enabled": bool,
"build_reporting_email": [str],
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"cache_enabled": bool,
"cheetah_import_whitelist": [str],
"client_use_https": bool,
"client_use_localhost": bool,
Optional("cobbler_master", default=""): str,
Optional("convert_server_to_ip", default=False): bool,
"createrepo_flags": str,
"default_autoinstall": str,
"default_name_servers": [str],
"default_name_servers_search": [str],
"default_ownership": [str],
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_gpxe": bool,
"enable_menu": bool,
Optional("extra_settings_list", default=[]): [str],
"http_port": int,
"include": [str],
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("jinja2_includedir", default="/var/lib/cobbler/jinja2"): str,
"kernel_options": dict,
"ldap_anonymous_bind": bool,
"ldap_base_dn": str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls": bool,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
Optional("bind_manage_ipmi", default=False): bool,
"manage_dhcp": bool,
"manage_dns": bool,
"manage_forward_zones": [str],
"manage_reverse_zones": [str],
Optional("manage_genders", False): bool,
"manage_rsync": bool,
"manage_tftpd": bool,
"mgmt_classes": [str],
# TODO: Validate Subdict
"mgmt_parameters": dict,
"next_server": str,
Optional("nsupdate_enabled", False): bool,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional("nsupdate_tsig_key", default=[]): [str],
"power_management_default_type": str,
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": bool,
Optional("puppet_parameterized_classes", default=True): bool,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"puppetca_path": str,
"pxe_just_once": bool,
"nopxe_with_triggers": bool,
"redhat_management_permissive": bool,
"redhat_management_server": str,
"redhat_management_key": str,
"register_new_installs": bool,
"remove_old_puppet_certs_automatically": bool,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": bool,
"restart_dns": bool,
"run_install_triggers": bool,
"scm_track_enabled": bool,
"scm_track_mode": str,
"scm_track_author": str,
"scm_push_script": str,
"serializer_pretty_json": bool,
"server": str,
"sign_puppet_certs_automatically": bool,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"tftpboot_location": str,
"virt_auto_boot": bool,
"webdir": str,
"webdir_whitelist": [str],
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": bool,
"yumdownloader_flags": str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.2.1 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_2_0.validate(settings):
raise SchemaError("V3.2.0: Schema error while validating")
# int bool to real bool conversion
bool_values = [
"allow_duplicate_hostnames",
"allow_duplicate_ips",
"allow_duplicate_macs",
"allow_duplicate_macs",
"allow_dynamic_settings",
"always_write_dhcp_entries",
"anamon_enabled",
"bind_manage_ipmi",
"build_reporting_enabled",
"cache_enabled",
"client_use_https",
"client_use_localhost",
"convert_server_to_ip",
"enable_gpxe",
"enable_menu",
"ldap_anonymous_bind",
"ldap_tls",
"manage_dhcp",
"manage_dns",
"manage_genders",
"manage_rsync",
"manage_tftp",
"manage_tftpd",
"nopxe_with_triggers",
"nsupdate_enabled",
"puppet_auto_setup",
"puppet_parameterized_classes",
"pxe_just_once",
"redhat_management_permissive",
"register_new_installs",
"remove_old_puppet_certs_automatically",
"restart_dhcp",
"restart_dns",
"run_install_triggers",
"scm_track_enabled",
"serializer_pretty_json",
"sign_puppet_certs_automatically",
"virt_auto_boot",
"yum_post_install_mirror",
]
for key in bool_values:
if key in settings:
settings[key] = input_converters.input_boolean(settings[key])
mgmt_parameters = "mgmt_parameters"
if mgmt_parameters in settings and "from_cobbler" in settings[mgmt_parameters]:
settings[mgmt_parameters]["from_cobbler"] = input_converters.input_boolean(
settings[mgmt_parameters]["from_cobbler"]
)
# proxy_url_ext -> None to ''
if settings["proxy_url_ext"] is None:
settings["proxy_url_ext"] = ""
# delete removed keys
deleted_keys = ["manage_tftp"]
for key in deleted_keys:
if key in settings:
helper.key_delete(key, settings)
# rename old settings filename
filename = "/etc/cobbler/settings"
if os.path.exists(filename):
os.rename(filename, filename + ".yaml")
filename += ".yaml"
return normalize(settings)
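The int-to-bool conversion this migration depends on can be approximated in isolation; ``to_bool`` below is an invented stand-in for ``input_converters.input_boolean`` and may not match its exact set of accepted spellings.

```python
from typing import Any


def to_bool(value: Any) -> bool:
    # Accept common truthy spellings; everything else maps to False.
    return str(value).strip().lower() in ("true", "1", "on", "yes", "y")


# Legacy settings stored booleans as 0/1 integers.
converted = {key: to_bool(raw) for key, raw in {"manage_dhcp": 1, "manage_dns": 0}.items()}
```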
# === File: cobbler_cobbler/cobbler/settings/migrations/__init__.py ===
"""
The name of the migration file is the target version.
One migration should update from version x to x + 1, where x is any Cobbler version and the migration updates to
the next version (e.g. 3.2.1 to 3.3.0).
The validation of the current version is in the file with the name of the version.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: 2022 Pablo Suárez Hernández <psuarezhernandez@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
# This module should not try to read the settings from the disk. --> Responsibility from settings/__init__.py
# If this module needs to check the existence of a file, the path should be handed as an argument to the function.
# Any migrations of modules.conf should be ignored for now.
import configparser
import glob
import logging
import os
from importlib import import_module
from inspect import signature
from pathlib import Path
from types import ModuleType
from typing import Any, Dict, List, Optional, Tuple, Union
from schema import Schema # type: ignore
import cobbler
logger = logging.getLogger()
migrations_path = os.path.normpath(
os.path.join(
os.path.abspath(os.path.dirname(cobbler.__file__)), "settings/migrations"
)
)
class CobblerVersion:
"""
Specifies a Cobbler Version
"""
def __init__(self, major: int = 0, minor: int = 0, patch: int = 0) -> None:
"""
Constructor
"""
self.major = int(major)
self.minor = int(minor)
self.patch = int(patch)
def __eq__(self, other: object) -> bool:
"""
        Compares 2 CobblerVersion objects for equality. Necessary for the tests.
"""
if not isinstance(other, CobblerVersion):
return False
return (
self.major == other.major
and self.minor == other.minor
and self.patch == other.patch
)
def __ne__(self, other: object) -> bool:
return not self.__eq__(other)
def __lt__(self, other: object) -> bool:
if not isinstance(other, CobblerVersion):
raise TypeError
if self.major < other.major:
return True
if self.major.__eq__(other.major):
if self.minor < other.minor:
return True
if self.minor.__eq__(other.minor):
if self.patch < other.patch:
return True
return False
def __le__(self, other: object) -> bool:
if self.__lt__(other) or self.__eq__(other):
return True
return False
def __gt__(self, other: object) -> bool:
if not isinstance(other, CobblerVersion):
raise TypeError
if self.major > other.major:
return True
if self.major.__eq__(other.major):
if self.minor > other.minor:
return True
if self.minor.__eq__(other.minor):
if self.patch > other.patch:
return True
return False
def __ge__(self, other: object) -> bool:
if self.__gt__(other) or self.__eq__(other):
return True
return False
def __hash__(self) -> int:
return hash((self.major, self.minor, self.patch))
def __str__(self) -> str:
return f"CobblerVersion: {self.major}.{self.minor}.{self.patch}"
def __repr__(self) -> str:
return f"CobblerVersion(major={self.major}, minor={self.minor}, patch={self.patch})"
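The four hand-written comparison methods above can be condensed with the standard library's ``functools.total_ordering``; this stand-alone sketch (the class name ``Version`` is invented here, not Cobbler's) reproduces the same (major, minor, patch) ordering via tuple comparison.

```python
from functools import total_ordering


@total_ordering
class Version:
    """Orders exactly like CobblerVersion: tuple order over (major, minor, patch)."""

    def __init__(self, major: int = 0, minor: int = 0, patch: int = 0) -> None:
        self.key = (int(major), int(minor), int(patch))

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Version) and self.key == other.key

    def __lt__(self, other: "Version") -> bool:
        return self.key < other.key

    def __hash__(self) -> int:
        return hash(self.key)


# Sorting yields the ascending order the migration engine slices.
ordered = sorted([Version(3, 4, 0), Version(3, 2, 0), Version(3, 2, 1)])
```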
EMPTY_VERSION = CobblerVersion()
VERSION_LIST: Dict[CobblerVersion, ModuleType] = {}
def __validate_module(name: ModuleType) -> bool:
"""
Validates a given module according to certain criteria:
* module must have certain methods implemented
* those methods must have a certain signature
    :param name: The module to validate.
    :return: True if every criterion is met, otherwise False.
"""
module_methods = {
"validate": "(settings:Dict[str,Any])->bool",
"normalize": "(settings:Dict[str,Any])->Dict[str,Any]",
"migrate": "(settings:Dict[str,Any])->Dict[str,Any]",
}
for key, value in module_methods.items():
if not hasattr(name, key):
return False
sig = str(signature(getattr(name, key))).replace(" ", "")
if value != sig:
return False
return True
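The check above relies on ``inspect.signature`` stringifying annotations without the ``typing.`` prefix, so the space-stripped string matches the expected shape exactly. A stand-alone sketch, assuming only the standard library:

```python
from inspect import signature
from typing import Any, Dict


def validate(settings: Dict[str, Any]) -> bool:
    return True


# inspect formats typing annotations as e.g. "Dict[str, Any]", so after
# removing spaces the signature string can be compared verbatim.
expected = "(settings:Dict[str,Any])->bool"
actual = str(signature(validate)).replace(" ", "")
```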
def __load_migration_modules(name: str, version: List[str]) -> None:
"""
Loads migration specific modules and if valid adds it to ``VERSION_LIST``.
:param name: The name of the module to load.
:param version: The migration version as list.
"""
module = import_module(f"cobbler.settings.migrations.{name}")
logger.info("Loaded migrations: %s", name)
if __validate_module(module):
version_list_int = [int(i) for i in version]
VERSION_LIST[CobblerVersion(*version_list_int)] = module
else:
logger.warning('Exception raised when loading migrations module "%s"', name)
def filter_settings_to_validate(
settings: Dict[str, Any], ignore_keys: Optional[List[str]] = None
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""
Separate settings to validate from the ones to exclude from validation
according to "ignore_keys" parameter and "extra_settings_list" setting value.
:param settings: The settings dict to validate.
:param ignore_keys: The list of ignore keys to exclude from validation.
:return data: The filtered settings to validate
:return data_to_exclude: The settings that were excluded from the validation
"""
if not ignore_keys:
ignore_keys = []
extra_settings = settings.get("extra_settings_list", [])
data_to_exclude = {
k: settings[k] for k in settings if k in ignore_keys or k in extra_settings
}
data = {
x: settings[x]
for x in settings
if x not in ignore_keys and x not in extra_settings
}
return data, data_to_exclude
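The split performed here can be sketched in isolation; ``split_settings`` below is an invented stand-in mirroring the same two dict comprehensions, with sample data made up for illustration.

```python
from typing import Any, Dict, List, Tuple


def split_settings(
    settings: Dict[str, Any], ignore_keys: List[str]
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    # Keys named in ignore_keys or listed in "extra_settings_list" bypass validation.
    extra = settings.get("extra_settings_list", [])
    excluded = {k: v for k, v in settings.items() if k in ignore_keys or k in extra}
    kept = {k: v for k, v in settings.items() if k not in excluded}
    return kept, excluded


settings = {
    "server": "cobbler.example.org",
    "extra_settings_list": ["my_plugin_option"],
    "my_plugin_option": 42,
    "secret": "s",
}
kept, excluded = split_settings(settings, ignore_keys=["secret"])
```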
def get_settings_file_version(
yaml_dict: Dict[str, Any], ignore_keys: Optional[List[str]] = None
) -> CobblerVersion:
"""
    Return the corresponding version of the given settings dict.
:param yaml_dict: The settings dict to get the version from.
:param ignore_keys: The list of ignore keys to exclude from validation.
:return: The discovered Cobbler Version or ``EMPTY_VERSION``
"""
if not ignore_keys:
ignore_keys = []
# Extra settings and ignored keys are excluded from validation
data, _ = filter_settings_to_validate(yaml_dict, ignore_keys)
for version, module_name in VERSION_LIST.items():
if module_name.validate(data):
return version
return EMPTY_VERSION
def get_installed_version(
filepath: Union[str, Path] = "/etc/cobbler/version"
) -> CobblerVersion:
"""
Retrieve the current Cobbler version. Normally it can be read from /etc/cobbler/version
:param filepath: The filepath of the version file, defaults to "/etc/cobbler/version"
"""
# The format of the version file is the following:
# $ cat /etc/cobbler/version
# [cobbler]
# gitdate = Fri Aug 13 13:52:40 2021 +0200
# gitstamp = 610d30d1
# builddate = Mon Aug 16 07:22:38 2021
# version = 3.2.1
# version_tuple = [3, 2, 1]
config = configparser.ConfigParser()
if not config.read(filepath):
        # filepath does not exist
return EMPTY_VERSION
version = config["cobbler"]["version"].split(".")
return CobblerVersion(int(version[0]), int(version[1]), int(version[2]))
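The version-file format documented in the comment above can be parsed with the same ``configparser`` calls; this sketch reads from an in-memory string instead of a file so it is self-contained.

```python
from configparser import ConfigParser

# Sample content mirroring the documented /etc/cobbler/version format.
VERSION_FILE = """\
[cobbler]
gitdate = Fri Aug 13 13:52:40 2021 +0200
gitstamp = 610d30d1
builddate = Mon Aug 16 07:22:38 2021
version = 3.2.1
version_tuple = [3, 2, 1]
"""

config = ConfigParser()
config.read_string(VERSION_FILE)
major, minor, patch = (int(x) for x in config["cobbler"]["version"].split("."))
```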
def get_schema(version: CobblerVersion) -> Schema:
"""
Returns a schema to a given Cobbler version
:param version: The Cobbler version object
:return: The schema of the Cobbler version
"""
return VERSION_LIST[version].schema
def discover_migrations(path: str = migrations_path) -> None:
"""
Discovers the migration module for each Cobbler version and loads it if it is valid according to certain conditions:
* the module must contain the following methods: validate(), normalize(), migrate()
    * those methods must have a certain signature
:param path: The path of the migration modules, defaults to migrations_path
"""
filenames = glob.glob(f"{path}/V[0-9]*_[0-9]*_[0-9]*.py")
for files in filenames:
basename = files.replace(path, "")
migration_name = ""
if basename.startswith("/"):
basename = basename[1:]
if basename.endswith(".py"):
migration_name = basename[:-3]
# migration_name should now be something like V3_0_0
# Remove leading V. Necessary to save values into CobblerVersion object
version = migration_name[1:].split("_")
__load_migration_modules(migration_name, version)
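The filename-to-version steps above (strip the path, drop the ``.py`` suffix and the leading ``V``, split on underscores) can be condensed into a single regular expression; ``parse_migration_name`` is an invented helper, not part of Cobbler.

```python
import re
from typing import Tuple


def parse_migration_name(basename: str) -> Tuple[int, ...]:
    # Matches module filenames like "V3_2_1.py" and extracts the version parts.
    match = re.fullmatch(r"V(\d+)_(\d+)_(\d+)\.py", basename)
    if match is None:
        raise ValueError(f"not a migration module: {basename}")
    return tuple(int(part) for part in match.groups())


version = parse_migration_name("V3_2_1.py")
```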
def auto_migrate(
yaml_dict: Dict[str, Any],
settings_path: Path,
ignore_keys: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""
Auto migration to the most recent version.
:param yaml_dict: The settings dict to migrate.
:param ignore_keys: The list of ignore keys to exclude from auto migration.
:param settings_path: The path of the settings dict.
:return: The migrated dict.
"""
if not ignore_keys:
ignore_keys = []
if not yaml_dict.get("auto_migrate_settings", True):
raise RuntimeError(
"Settings automigration disabled but required for starting the daemon!"
)
settings_version = get_settings_file_version(yaml_dict, ignore_keys)
if settings_version == EMPTY_VERSION:
raise RuntimeError("Automigration not possible due to undiscoverable settings!")
sorted_version_list = sorted(list(VERSION_LIST.keys()))
migrations = sorted_version_list[sorted_version_list.index(settings_version) :]
    for index in range(len(migrations) - 1):
yaml_dict = migrate(
yaml_dict,
settings_path,
migrations[index],
migrations[index + 1],
ignore_keys,
)
return yaml_dict
def migrate(
yaml_dict: Dict[str, Any],
settings_path: Path,
old: CobblerVersion = EMPTY_VERSION,
new: CobblerVersion = EMPTY_VERSION,
ignore_keys: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""
    Migration to a specific version. If neither an old nor a new version is supplied, ``auto_migrate()`` is called.
:param yaml_dict: The settings dict to migrate.
:param settings_path: The path of the settings dict.
:param old: The version to migrate from, defaults to EMPTY_VERSION.
:param new: The version to migrate to, defaults to EMPTY_VERSION.
    :param ignore_keys: The list of settings to be excluded from migration.
    :raises ValueError: Raised if attempting to downgrade.
:return: The migrated dict.
"""
if not ignore_keys:
ignore_keys = []
# If no version supplied do auto migrations
if old == EMPTY_VERSION and new == EMPTY_VERSION:
return auto_migrate(yaml_dict, settings_path, ignore_keys)
if EMPTY_VERSION in (old, new):
raise ValueError(
"Either both or no versions must be specified for a migration!"
)
# Extra settings and ignored keys are excluded from validation
data, data_to_exclude_from_validation = filter_settings_to_validate(
yaml_dict, ignore_keys
)
if old == new:
data = VERSION_LIST[old].normalize(data)
        # Put back settings excluded from validation
data.update(data_to_exclude_from_validation)
return data
# If both versions are present, check if old < new and then migrate the appropriate versions.
if old > new:
raise ValueError("Downgrades are not supported!")
sorted_version_list = sorted(list(VERSION_LIST.keys()))
migration_list = sorted_version_list[
sorted_version_list.index(old) + 1 : sorted_version_list.index(new) + 1
]
for key in migration_list:
data = VERSION_LIST[key].migrate(data)
    # Put back settings excluded from validation
data.update(data_to_exclude_from_validation)
return data
def validate(
settings: Dict[str, Any],
settings_path: Path,
ignore_keys: Optional[List[str]] = None,
) -> bool:
"""
Wrapper function for the validate() methods of the individual migration modules.
:param settings: The settings dict to validate.
:param settings_path: TODO: not used at the moment
    :param ignore_keys: The list of settings to be excluded from validation.
:return: True if settings are valid, otherwise False.
"""
if not ignore_keys:
ignore_keys = []
version = get_installed_version()
# Extra settings and ignored keys are excluded from validation
data, _ = filter_settings_to_validate(settings, ignore_keys)
result: bool = VERSION_LIST[version].validate(data)
return result
def normalize(
settings: Dict[str, Any], ignore_keys: Optional[List[str]] = None
) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
    :param ignore_keys: The list of settings to be excluded from normalization.
:return: The validated dict.
"""
if not ignore_keys:
ignore_keys = []
version = get_installed_version()
# Extra settings and ignored keys are excluded from validation
data, data_to_exclude_from_validation = filter_settings_to_validate(
settings, ignore_keys
)
result: Dict[str, Any] = VERSION_LIST[version].normalize(data)
# Put back settings excluded from validation
result.update(data_to_exclude_from_validation)
return result
discover_migrations()
(source corpus metadata: repo cobbler/cobbler, license GPL-2.0, extracted 9/5/2024)

==== cobbler_cobbler/cobbler/settings/migrations/V3_2_0.py ====
"""
Migration from V3.1.2 to V3.2.0
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Or, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_1_2, helper
schema = Schema(
{
Optional("auto_migrate_settings", default=True): bool,
"allow_duplicate_hostnames": int,
"allow_duplicate_ips": int,
"allow_duplicate_macs": int,
"allow_dynamic_settings": int,
"always_write_dhcp_entries": int,
"anamon_enabled": int,
"authn_pam_service": str,
"auth_token_expiration": int,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
Optional("bind_manage_ipmi", default=0): int,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"build_reporting_enabled": int,
"build_reporting_email": list,
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
"cache_enabled": int,
"cheetah_import_whitelist": list,
"client_use_https": int,
"client_use_localhost": int,
Optional("cobbler_master", default=""): str,
"createrepo_flags": str,
"default_autoinstall": str,
"default_name_servers": list,
"default_name_servers_search": list,
"default_ownership": list,
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_gpxe": int,
"enable_menu": int,
Optional("extra_settings_list", default=[]): [str],
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"http_port": int,
"include": list,
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
"kernel_options": dict,
"ldap_anonymous_bind": int,
"ldap_base_dn": str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
"ldap_tls": int,
"manage_dhcp": int,
"manage_dns": int,
"manage_forward_zones": list,
Optional("manage_genders", default=0): int,
"manage_reverse_zones": list,
"manage_rsync": int,
Optional("manage_tftp", default=1): int,
"manage_tftpd": int,
"mgmt_classes": list,
"mgmt_parameters": dict,
"next_server": str,
"nopxe_with_triggers": int,
Optional("nsupdate_enabled", default=0): int,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional(
"nsupdate_tsig_key",
default=[
"cobbler_update_key.",
"hvnK54HFJXFasHjzjEn09ASIkCOGYSnofRq4ejsiBHz3udVyGiuebFGAswSjKUxNuhmllPrkI0HRSSmM2qvZug==",
],
): list,
"power_management_default_type": str,
Optional("proxy_url_ext", default=""): Or(None, str), # type: ignore
"proxy_url_int": str,
"puppet_auto_setup": int,
"puppetca_path": str,
Optional("puppet_parameterized_classes", default=1): int,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"pxe_just_once": int,
"redhat_management_key": str,
"redhat_management_permissive": int,
"redhat_management_server": str,
"register_new_installs": int,
"remove_old_puppet_certs_automatically": int,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"reposync_rsync_flags": str,
"restart_dhcp": int,
"restart_dns": int,
"run_install_triggers": int,
"scm_push_script": str,
"scm_track_author": str,
"scm_track_enabled": int,
"scm_track_mode": str,
"serializer_pretty_json": int,
"server": str,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"sign_puppet_certs_automatically": int,
"tftpboot_location": str,
"virt_auto_boot": int,
"webdir": str,
"webdir_whitelist": list,
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": int,
"yumdownloader_flags": str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.2.0 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_1_2.validate(settings):
raise SchemaError("V3.1.2: Schema error while validating")
# add missing keys
# name - value pairs
missing_keys = {
"cache_enabled": 1,
"reposync_rsync_flags": "-rltDv --copy-unsafe-links",
}
for key, value in missing_keys.items():
new_setting = helper.Setting(key, value)
helper.key_add(new_setting, settings)
return normalize(settings)
==== cobbler_cobbler/cobbler/settings/migrations/V2_8_5.py ====
"""
Migration from V2.x.x to V2.8.5
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Schema, SchemaError # type: ignore
schema = Schema(
{
Optional("auto_migrate_settings", default=True): int,
"allow_duplicate_hostnames": int,
"allow_duplicate_ips": int,
"allow_duplicate_macs": int,
"allow_dynamic_settings": int,
"always_write_dhcp_entries": int,
"anamon_enabled": int,
"authn_pam_service": str,
"auth_token_expiration": int,
"bind_chroot_path": str,
"bind_manage_ipmi": int,
"bind_master": str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"build_reporting_enabled": int,
"build_reporting_email": list,
"build_reporting_ignorelist": list,
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
Optional("build_reporting_to_address", default=""): str,
"cheetah_import_whitelist": list,
"client_use_https": int,
"client_use_localhost": int,
Optional("cobbler_master", default=""): str,
"consoles": str,
"createrepo_flags": str,
Optional("default_deployment_method", default=""): str,
"default_kickstart": str,
"default_name_servers": list,
Optional("default_name_servers_search", default=[]): list,
"default_ownership": list,
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_gpxe": int,
"enable_menu": int,
"func_auto_setup": int,
"func_master": str,
"http_port": int,
Optional("isc_set_host_name", default=0): int,
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
Optional("kerberos_realm", default="EXAMPLE.COM"): str,
"kernel_options": dict,
"kernel_options_s390x": dict,
"ldap_anonymous_bind": int,
"ldap_base_dn": str,
Optional("ldap_management_default_type", default="authconfig"): str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls": int,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
"manage_dhcp": int,
"manage_dns": int,
"manage_forward_zones": list,
"manage_reverse_zones": list,
"manage_genders": int,
"manage_rsync": int,
Optional("manage_tftp", default=1): int,
"manage_tftpd": int,
"mgmt_classes": list,
"mgmt_parameters": dict,
"next_server": str,
"power_management_default_type": str,
"power_template_dir": str,
"proxy_url_ext": str,
"proxy_url_int": str,
"puppet_auto_setup": int,
"puppetca_path": str,
Optional("puppet_parameterized_classes", default=1): int,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"pxe_just_once": int,
"pxe_template_dir": str,
"redhat_management_permissive": int,
"redhat_management_server": str,
"redhat_management_key": str,
"redhat_management_type": str,
"register_new_installs": int,
"remove_old_puppet_certs_automatically": int,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"restart_dhcp": int,
"restart_dns": int,
"run_install_triggers": int,
"scm_track_enabled": int,
"scm_track_mode": str,
"serializer_pretty_json": int,
"server": str,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/2.8.x/latest.json",
): str,
"sign_puppet_certs_automatically": int,
"snippetsdir": str,
"template_remote_kickstarts": int,
"virt_auto_boot": int,
"webdir": str,
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": int,
"yumdownloader_flags": str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V2.8.5 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not validate(settings):
raise SchemaError("V2.8.5: Schema error while validating")
return normalize(settings)
==== cobbler_cobbler/cobbler/settings/migrations/V3_1_1.py ====
"""
Migration from V3.1.0 to V3.1.1
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import SchemaError # type: ignore
from cobbler.settings.migrations import V3_1_0
# schema identical to V3_1_0
schema = V3_1_0.schema
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.1.1 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_1_0.validate(settings):
raise SchemaError("V3.1.0: Schema error while validating")
return normalize(settings)
==== cobbler_cobbler/cobbler/settings/migrations/V3_0_1.py ====
"""
Migration from V3.0.0 to V3.0.1
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict, List
from schema import SchemaError # type: ignore
from cobbler.settings.migrations import V3_0_0
# schema identical to V3_0_0
schema = V3_0_0.schema
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def __migrate_modules_conf() -> None:
modules_conf_path = "/etc/cobbler/modules.conf"
with open(modules_conf_path, "r", encoding="UTF-8") as modules_conf_file:
result: List[str] = []
replacements = {
"authn_": "authentication.",
"authz_": "authorization.",
"manage_": "managers.",
}
for line in modules_conf_file:
for to_replace, replacement in replacements.items():
idx = line.find(to_replace)
if idx == -1:
continue
result.append(
"%(head)s%(replacement)s%(tail)s"
% {
"head": line[:idx],
"replacement": replacement,
"tail": line[idx + len(to_replace) :],
}
)
break
else:  # no break occurred -> nothing to replace
result.append(line)
with open(modules_conf_path, "w", encoding="UTF-8") as modules_conf_file:
for line in result:
modules_conf_file.write(line)
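The per-line rewrite performed by ``__migrate_modules_conf()`` can be exercised without touching ``/etc/cobbler/modules.conf``. A minimal sketch of the same prefix substitution, with a hypothetical ``rewrite_line`` helper operating on an in-memory string:

```python
# Sketch of the prefix rewrite done line by line above: the first matching
# old-style module prefix is replaced with its dotted new-style name.
REPLACEMENTS = {
    "authn_": "authentication.",
    "authz_": "authorization.",
    "manage_": "managers.",
}


def rewrite_line(line: str) -> str:
    for old, new in REPLACEMENTS.items():
        idx = line.find(old)
        if idx != -1:
            return line[:idx] + new + line[idx + len(old):]
    return line  # nothing to replace


print(rewrite_line("module = authn_pam"))  # -> module = authentication.pam
```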
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.0.1 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
__migrate_modules_conf()
if not V3_0_0.validate(settings):
raise SchemaError("V3.0.0: Schema error while validating")
return normalize(settings)
==== cobbler_cobbler/cobbler/settings/migrations/V3_1_2.py ====
"""
Migration from V3.1.1 to V3.1.2
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
from typing import Any, Dict
from schema import Optional, Or, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V3_1_1, helper
schema = Schema(
{
Optional("auto_migrate_settings", default=True): bool,
"allow_duplicate_hostnames": int,
"allow_duplicate_ips": int,
"allow_duplicate_macs": int,
"allow_dynamic_settings": int,
"always_write_dhcp_entries": int,
"anamon_enabled": int,
"authn_pam_service": str,
"auth_token_expiration": int,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
Optional("bind_manage_ipmi", default=0): int,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"build_reporting_enabled": int,
"build_reporting_email": list,
"build_reporting_ignorelist": [str],
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
"cheetah_import_whitelist": list,
"client_use_https": int,
"client_use_localhost": int,
Optional("cobbler_master", default=""): str,
"createrepo_flags": str,
"default_autoinstall": str,
"default_name_servers": list,
"default_name_servers_search": list,
"default_ownership": list,
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_gpxe": int,
"enable_menu": int,
Optional("extra_settings_list", default=[]): [str],
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"http_port": int,
"include": list,
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
"kernel_options": dict,
"ldap_anonymous_bind": int,
"ldap_base_dn": str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
"ldap_tls": int,
"manage_dhcp": int,
"manage_dns": int,
"manage_forward_zones": list,
Optional("manage_genders", default=0): int,
"manage_reverse_zones": list,
"manage_rsync": int,
Optional("manage_tftp", default=1): int,
"manage_tftpd": int,
"mgmt_classes": list,
"mgmt_parameters": dict,
"next_server": str,
"nopxe_with_triggers": int,
Optional("nsupdate_enabled", default=0): int,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional(
"nsupdate_tsig_key",
default=[
"cobbler_update_key.",
"hvnK54HFJXFasHjzjEn09ASIkCOGYSnofRq4ejsiBHz3udVyGiuebFGAswSjKUxNuhmllPrkI0HRSSmM2qvZug==",
],
): list,
"power_management_default_type": str,
Optional("proxy_url_ext", default=""): Or(None, str), # type: ignore
"proxy_url_int": str,
"puppet_auto_setup": int,
"puppetca_path": str,
Optional("puppet_parameterized_classes", default=1): int,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"pxe_just_once": int,
"redhat_management_key": str,
"redhat_management_permissive": int,
"redhat_management_server": str,
"register_new_installs": int,
"remove_old_puppet_certs_automatically": int,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"restart_dhcp": int,
"restart_dns": int,
"run_install_triggers": int,
"scm_push_script": str,
"scm_track_author": str,
"scm_track_enabled": int,
"scm_track_mode": str,
"serializer_pretty_json": int,
"server": str,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"sign_puppet_certs_automatically": int,
"tftpboot_location": str,
"virt_auto_boot": int,
"webdir": str,
"webdir_whitelist": list,
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": int,
"yumdownloader_flags": str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.1.2 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V3_1_1.validate(settings):
raise SchemaError("V3.1.1: Schema error while validating")
# delete removed key
helper.key_delete("power_template_dir", settings)
return normalize(settings)
==== cobbler_cobbler/cobbler/settings/migrations/helper.py ====
"""
Helper module which contains shared logic for adjusting the settings.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import datetime
import os
from shutil import copytree
from typing import Any, Dict, List, Union
class Setting:
"""
Specifies a setting object
"""
def __init__(self, location: Union[str, List[str]], value: Any):
"""
Constructor
"""
if isinstance(location, str):
self.location = self.split_str_location(location)
elif isinstance(location, list): # type: ignore
self.location = location
else:
raise TypeError("location must be of type str or list.")
self.value = value
def __eq__(self, other: Any) -> bool:
"""
Compares two Setting objects for equality. Necessary for the tests.
"""
if not isinstance(other, Setting):
return False
return self.value == other.value and self.location == other.location
@property
def key_name(self) -> str:
"""
Returns the last element of the location, i.e. the key name.
"""
return self.location[-1]
@staticmethod
def split_str_location(location: str) -> List[str]:
"""
Split the given location at "."
Necessary for nesting in our settings file
:param location: Can be "manage.dhcp_v4" or "restart.dhcp_v4" for example.
"""
return location.split(".")
# Some algorithms taken from https://stackoverflow.com/a/14692746/4730773
def key_add(new: Setting, settings: Dict[str, Any]) -> None:
"""
Add a new settings key.
:param new: The new setting to add.
:param settings: The settings dict to which the key should be added.
"""
nested = new.location
for key in nested[:-1]:
if settings.get(key) is None:
settings[key] = {}
settings = settings[key]
settings[nested[-1]] = new.value
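A condensed sketch of what ``key_add()`` does for a nested location, using a plain list path and a hypothetical ``add_key`` helper instead of the ``Setting`` class (``setdefault`` only fills missing keys, a minor difference from the ``is None`` check above):

```python
from typing import Any, Dict, List


def add_key(location: List[str], value: Any, settings: Dict[str, Any]) -> None:
    """Walk the path, creating intermediate dicts, then set the leaf value."""
    node = settings
    for part in location[:-1]:
        node = node.setdefault(part, {})
    node[location[-1]] = value


cfg: Dict[str, Any] = {}
add_key("manage.dhcp_v4".split("."), True, cfg)
print(cfg)  # -> {'manage': {'dhcp_v4': True}}
```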
def key_delete(delete: str, settings: Dict[str, Any]) -> None:
"""
Deletes a given setting
:param delete: The name of the setting to be deleted.
:param settings: The settings dict where the key should be deleted.
"""
delete_obj = Setting(delete, None)
if len(delete_obj.location) == 1:
del settings[delete_obj.key_name]
else:
del key_get(".".join(delete_obj.location[:-1]), settings).value[
delete_obj.key_name
]
def key_get(key: str, settings: Dict[str, Any]) -> Setting:
"""
Get a key from the settings
:param key: The key to get in the form "a.b.c"
:param settings: The dict to operate on.
:return: The desired key from the settings dict
"""
# TODO: Check if key does not exist
if not key:
raise ValueError("Key must not be empty!")
new = Setting(key, None)
nested = new.location
for keys in nested[:-1]:
settings = settings[keys]
new.value = settings[nested[-1]]
return new
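``key_get()`` reduced to its traversal core, as a hypothetical ``get_key`` helper that returns the leaf value directly instead of a ``Setting`` object:

```python
from typing import Any, Dict


def get_key(key: str, settings: Dict[str, Any]) -> Any:
    """Follow a dotted path such as "a.b.c" and return the leaf value."""
    if not key:
        raise ValueError("Key must not be empty!")
    parts = key.split(".")
    node = settings
    for part in parts[:-1]:
        node = node[part]
    return node[parts[-1]]


print(get_key("manage.dhcp_v4", {"manage": {"dhcp_v4": 1}}))  # -> 1
```

Like the original, this raises ``KeyError`` for a missing path; the TODO above notes that case is not yet handled gracefully.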
def key_move(move: Setting, new_location: List[str], settings: Dict[str, Any]) -> None:
"""
Delete the old setting and create a new key at ``new_location``
:param move: The name of the old key which should be moved.
:param new_location: The location of the new key
:param settings: The dict to operate on.
"""
new_setting = Setting(new_location, move.value)
key_delete(".".join(move.location), settings)
key_add(new_setting, settings)
def key_rename(old_name: Setting, new_name: str, settings: Dict[str, Any]) -> None:
"""
Wrapper for key_move()
:param old_name: The old name
:param new_name: The new name
:param settings:
"""
new_location = old_name.location[:-1] + [new_name]
key_move(old_name, new_location, settings)
def key_set_value(new: Setting, settings: Dict[str, Any]) -> None:
"""
Change the value of a setting.
:param new: A Settings object with the new information.
:param settings: The settings dict.
"""
nested = new.location
for key in nested[:-1]:
settings = settings[key]
settings[nested[-1]] = new.value
def key_drop_if_default(
settings: Dict[str, Any], defaults: Dict[str, Any]
) -> Dict[str, Any]:
"""
Drop all keys which values are identical to the default ones.
:param settings: The current settings read from an external source
:param defaults: The full settings with default values
"""
for key in list(settings.keys()):
if isinstance(settings[key], dict):
settings[key] = key_drop_if_default(settings[key], defaults[key])
if len(settings[key]) == 0:
del settings[key]
else:
if settings[key] == defaults[key]:
settings.pop(key)
return settings
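The recursion in ``key_drop_if_default()`` can be illustrated on sample data. A minimal sketch with the same structure, pruning nested dicts that become empty:

```python
from typing import Any, Dict


def drop_defaults(settings: Dict[str, Any], defaults: Dict[str, Any]) -> Dict[str, Any]:
    """Recursively drop keys whose values match the defaults."""
    for key in list(settings.keys()):
        if isinstance(settings[key], dict):
            settings[key] = drop_defaults(settings[key], defaults[key])
            if not settings[key]:
                del settings[key]
        elif settings[key] == defaults[key]:
            settings.pop(key)
    return settings


defaults = {"server": "127.0.0.1", "manage": {"dhcp": 0, "dns": 0}}
current = {"server": "10.0.0.1", "manage": {"dhcp": 1, "dns": 0}}
print(drop_defaults(current, defaults))
# -> {'server': '10.0.0.1', 'manage': {'dhcp': 1}}
```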
def backup_dir(dir_path: str) -> None:
"""
Copies the directory tree and adds a suffix ".backup.XXXXXXXXX" to it.
:param dir_path: The full path to the directory which should be backed up.
:raises FileNotFoundError: In case the path specified was not existing.
"""
src = os.path.normpath(dir_path)
now_iso = datetime.datetime.now().isoformat()
copytree(dir_path, f"{src}.backup.{now_iso}")
==== cobbler_cobbler/cobbler/settings/migrations/V3_0_0.py ====
"""
Migration from V2.8.5 to V3.0.0
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: 2021 Dominik Gedon <dgedon@suse.de>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import glob
import json
import os
import shutil
from typing import Any, Dict, List
from schema import Optional, Or, Schema, SchemaError # type: ignore
from cobbler.settings.migrations import V2_8_5, helper
schema = Schema(
{
Optional("auto_migrate_settings", default=True): bool,
"allow_duplicate_hostnames": int,
"allow_duplicate_ips": int,
"allow_duplicate_macs": int,
"allow_dynamic_settings": int,
"always_write_dhcp_entries": int,
"anamon_enabled": int,
"authn_pam_service": str,
"auth_token_expiration": int,
"autoinstall_snippets_dir": str,
"autoinstall_templates_dir": str,
"bind_chroot_path": str,
Optional("bind_manage_ipmi", default=0): int,
"bind_master": str,
"boot_loader_conf_template_dir": str,
Optional("bootloaders_dir", default="/var/lib/cobbler/loaders"): str,
Optional("buildisodir", default="/var/cache/cobbler/buildiso"): str,
"build_reporting_enabled": int,
"build_reporting_email": list,
# in yaml this is a list but should be a str
"build_reporting_ignorelist": list,
"build_reporting_sender": str,
"build_reporting_smtp_server": str,
"build_reporting_subject": str,
"cheetah_import_whitelist": list,
"client_use_https": int,
"client_use_localhost": int,
Optional("cobbler_master", default=""): str,
"createrepo_flags": str,
"default_autoinstall": str,
"default_name_servers": list,
"default_name_servers_search": list,
"default_ownership": list,
"default_password_crypted": str,
"default_template_type": str,
"default_virt_bridge": str,
Optional("default_virt_disk_driver", default="raw"): str,
"default_virt_file_size": int,
"default_virt_ram": int,
"default_virt_type": str,
"enable_gpxe": int,
"enable_menu": int,
Optional("grubconfig_dir", default="/var/lib/cobbler/grub_config"): str,
"http_port": int,
"include": list,
Optional("iso_template_dir", default="/etc/cobbler/iso"): str,
"kernel_options": dict,
"ldap_anonymous_bind": int,
"ldap_base_dn": str,
"ldap_port": int,
"ldap_search_bind_dn": str,
"ldap_search_passwd": str,
"ldap_search_prefix": str,
"ldap_server": str,
"ldap_tls_cacertfile": str,
"ldap_tls_certfile": str,
"ldap_tls_keyfile": str,
# in yaml this is an int but should be a str
"ldap_tls": int,
"manage_dhcp": int,
"manage_dns": int,
"manage_forward_zones": list,
Optional("manage_genders", default=0): int,
"manage_reverse_zones": list,
"manage_rsync": int,
Optional("manage_tftp", default=1): int,
"manage_tftpd": int,
"mgmt_classes": list,
"mgmt_parameters": dict,
"next_server": str,
"nopxe_with_triggers": int,
Optional("nsupdate_enabled", default=0): int,
Optional("nsupdate_log", default="/var/log/cobbler/nsupdate.log"): str,
Optional("nsupdate_tsig_algorithm", default="hmac-sha512"): str,
Optional(
"nsupdate_tsig_key",
default=[
"cobbler_update_key.",
"hvnK54HFJXFasHjzjEn09ASIkCOGYSnofRq4ejsiBHz3udVyGiuebFGAswSjKUxNuhmllPrkI0HRSSmM2qvZug==",
],
): list,
"power_management_default_type": str,
"power_template_dir": str,
Optional("proxy_url_ext", default=""): Or(None, str), # type: ignore
"proxy_url_int": str,
"puppet_auto_setup": int,
"puppetca_path": str,
Optional("puppet_parameterized_classes", default=1): int,
Optional("puppet_server", default="puppet"): str,
Optional("puppet_version", default=2): int,
"pxe_just_once": int,
"redhat_management_key": str,
"redhat_management_permissive": int,
"redhat_management_server": str,
"register_new_installs": int,
"remove_old_puppet_certs_automatically": int,
"replicate_repo_rsync_options": str,
"replicate_rsync_options": str,
"reposync_flags": str,
"restart_dhcp": int,
"restart_dns": int,
"run_install_triggers": int,
"scm_push_script": str,
"scm_track_author": str,
"scm_track_enabled": int,
"scm_track_mode": str,
"serializer_pretty_json": int,
"server": str,
Optional(
"signature_path", default="/var/lib/cobbler/distro_signatures.json"
): str,
Optional(
"signature_url",
default="https://cobbler.github.io/signatures/3.0.x/latest.json",
): str,
"sign_puppet_certs_automatically": int,
"tftpboot_location": str,
"virt_auto_boot": int,
"webdir": str,
"webdir_whitelist": list,
"xmlrpc_port": int,
"yum_distro_priority": int,
"yum_post_install_mirror": int,
"yumdownloader_flags": str,
}, # type: ignore
ignore_extra_keys=False,
)
def validate(settings: Dict[str, Any]) -> bool:
"""
Checks that a given settings dict is valid according to the reference schema ``schema``.
:param settings: The settings dict to validate.
:return: True if valid settings dict otherwise False.
"""
try:
schema.validate(settings) # type: ignore
except SchemaError:
return False
return True
def normalize(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
If data in ``settings`` is valid the validated data is returned.
:param settings: The settings dict to validate.
:return: The validated dict.
"""
# We are aware of our schema and thus can safely ignore this.
return schema.validate(settings) # type: ignore
def migrate(settings: Dict[str, Any]) -> Dict[str, Any]:
"""
Migration of the settings ``settings`` to the V3.0.0 settings
:param settings: The settings dict to migrate
:return: The migrated dict
"""
if not V2_8_5.validate(settings):
raise SchemaError("V2.8.5: Schema error while validating")
# rename keys and update their value
old_setting = helper.Setting(
"default_kickstart", "/var/lib/cobbler/kickstarts/default.ks"
)
new_setting = helper.Setting(
"default_autoinstall", "/var/lib/cobbler/autoinstall_templates/default.ks"
)
helper.key_rename(old_setting, "default_autoinstall", settings)
helper.key_set_value(new_setting, settings)
old_setting = helper.Setting("snippetsdir", "/var/lib/cobbler/snippets")
new_setting = helper.Setting(
"autoinstall_snippets_dir", "/var/lib/cobbler/snippets"
)
helper.key_rename(old_setting, "autoinstall_snippets_dir", settings)
helper.key_set_value(new_setting, settings)
# add missing keys
# name - value pairs
missing_keys = {
"autoinstall_templates_dir": "/var/lib/cobbler/templates",
"boot_loader_conf_template_dir": "/etc/cobbler/boot_loader_conf",
"default_name_servers_search": [],
"include": ["/etc/cobbler/settings.d/*.settings"],
"nopxe_with_triggers": 1,
"scm_push_script": "/bin/true",
"scm_track_author": "cobbler <cobbler@localhost>",
"tftpboot_location": "/srv/tftpboot",
"webdir_whitelist": [],
}
for key, value in missing_keys.items():
new_setting = helper.Setting(key, value)
helper.key_add(new_setting, settings)
# delete removed keys
deleted_keys = [
"consoles",
"func_auto_setup",
"func_master",
"kernel_options_s390x",
"pxe_template_dir",
"redhat_management_type",
"template_remote_kickstarts",
]
for key in deleted_keys:
helper.key_delete(key, settings)
# START: migrate-data-v2-to-v3
def serialize_item(collection: str, item: Dict[str, Any]) -> None:
"""
Save a collection item to file system
:param collection: name
:param item: dictionary
"""
filename = f"/var/lib/cobbler/collections/{collection}/{item['name']}"
if settings.get("serializer_pretty_json", False):
sort_keys = True
indent = 4
else:
sort_keys = False
indent = None
filename += ".json"
with open(filename, "w", encoding="UTF-8") as item_fd:
data = json.dumps(item, sort_keys=sort_keys, indent=indent)
item_fd.write(data)
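The ``serializer_pretty_json`` toggle only changes the ``json.dumps`` keyword arguments; the two output styles look like this (sample item, not a real collection record):

```python
import json

item = {"name": "testdistro", "breed": "suse"}

# serializer_pretty_json switches between compact and human-readable output.
compact = json.dumps(item, sort_keys=False, indent=None)
pretty = json.dumps(item, sort_keys=True, indent=4)

assert "\n" not in compact              # single line on disk
assert pretty.startswith('{\n    "breed"')  # sorted keys, 4-space indent
```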
def deserialize_raw_old(collection_types: str) -> List[Dict[str, Any]]:
results = []
all_files = glob.glob(f"/var/lib/cobbler/config/{collection_types}/*")
for file in all_files:
with open(file, encoding="UTF-8") as item_fd:
json_data = item_fd.read()
_dict = json.loads(json_data)
results.append(_dict) # type: ignore
return results # type: ignore
def substitute_paths(value: Any) -> Any:
if isinstance(value, list):
value = [substitute_paths(x) for x in value] # type: ignore
elif isinstance(value, str):
value = value.replace("/ks_mirror/", "/distro_mirror/")
return value
def transform_key(key: str, value: Any) -> Any:
if key in transform:
ret_value = transform[key](value)
else:
ret_value = value
return substitute_paths(ret_value)
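Standalone, the recursive path rewrite behaves as follows (same logic as above, exercised with sample values):

```python
from typing import Any

def substitute_paths(value: Any) -> Any:
    """Recursively rewrite the old ks_mirror path inside strings and lists."""
    if isinstance(value, list):
        return [substitute_paths(x) for x in value]
    if isinstance(value, str):
        return value.replace("/ks_mirror/", "/distro_mirror/")
    return value

assert substitute_paths("/var/www/cobbler/ks_mirror/suse15") == \
    "/var/www/cobbler/distro_mirror/suse15"
# Non-string members pass through untouched.
assert substitute_paths(["/a/ks_mirror/b", 42]) == ["/a/distro_mirror/b", 42]
```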
# Keys to add to various collections
add = {
"distros": {
"boot_loader": "grub",
},
"profiles": {
"next_server": "<<inherit>>",
},
"systems": {
"boot_loader": "<<inherit>>",
"next_server": "<<inherit>>",
"power_identity_file": "",
"power_options": "",
"serial_baud_rate": "",
"serial_device": "",
},
}
# Keys to remove
remove = [
"ldap_enabled",
"ldap_type",
"monit_enabled",
"redhat_management_server",
"template_remote_kickstarts",
]
# Keys to rename
rename = {
"kickstart": "autoinstall",
"ks_meta": "autoinstall_meta",
}
# Keys to transform - use new key name if renamed
transform = {
"autoinstall": os.path.basename,
}
# Convert the old collections to new collections
for old_type in [
"distros.d",
"files.d",
"images.d",
"mgmtclasses.d",
"packages.d",
"profiles.d",
"repos.d",
"systems.d",
]:
new_type = old_type[:-2]
# Load old files
old_collection = deserialize_raw_old(old_type)
print(f"Processing {old_type}:")
for old_item in old_collection:
print(f" Processing {old_item['name']}")
new_item: Dict[str, Any] = {}
for key in old_item:
if key in remove:
continue
if key in rename:
new_item[rename[key]] = transform_key(rename[key], old_item[key])
continue
new_item[key] = transform_key(key, old_item[key])
if new_type in add:
# We only add items if they don't exist
for item in add[new_type]:
if item not in new_item:
new_item[item] = add[new_type][item]
serialize_item(new_type, new_item)
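Applied to a single legacy profile record, the remove/rename/transform/add pipeline works like this (sample data only, not a real collection item):

```python
import os
from typing import Any, Dict

remove = ["ldap_enabled"]
rename = {"kickstart": "autoinstall"}
transform = {"autoinstall": os.path.basename}
add = {"profiles": {"next_server": "<<inherit>>"}}

old_item = {"name": "p1", "kickstart": "/var/lib/cobbler/kickstarts/default.ks",
            "ldap_enabled": False}
new_item: Dict[str, Any] = {}
for key, value in old_item.items():
    if key in remove:          # dropped keys vanish entirely
        continue
    key = rename.get(key, key)  # renamed keys get the new name
    new_item[key] = transform[key](value) if key in transform else value
for key, value in add["profiles"].items():
    new_item.setdefault(key, value)  # added keys never overwrite existing ones

assert new_item == {"name": "p1", "autoinstall": "default.ks",
                    "next_server": "<<inherit>>"}
```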
path_rename = [
("/var/lib/cobbler/kickstarts", "/var/lib/cobbler/templates"),
("/var/www/cobbler/ks_mirror", "/var/www/cobbler/distro_mirror"),
]
# Copy paths
for old_path, new_path in path_rename:
if os.path.isdir(old_path):
shutil.copytree(old_path, new_path)
os.rename(old_path, new_path)
# END: migrate-data-v2-to-v3
if not validate(settings):
raise SchemaError("V3.0.0: Schema error while validating")
return normalize(settings)
# ---- cobbler_cobbler/cobbler/modules/nsupdate_delete_system_pre.py ----

"""
Replace (or remove) records in DNS zone for systems created (or removed) by Cobbler
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Adrian Brzezinski <adrbxx@gmail.com>
# DNS toolkit for Python
# - python-dnspython (Debian)
# - python-dns (RH/CentOS)
import time
from typing import IO, TYPE_CHECKING, Any, List, Optional
import dns.query
import dns.resolver
import dns.tsigkeyring
import dns.update
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
LOGF: Optional[IO[str]] = None
def nslog(msg: str) -> None:
"""
Log a message to the logger.
:param msg: The message to log.
"""
if LOGF is not None:
LOGF.write(msg)
def register() -> str:
"""
This method is the obligatory Cobbler registration hook.
:return: The trigger name or an empty string.
"""
if __name__ == "cobbler.modules.nsupdate_add_system_post":
return "/var/lib/cobbler/triggers/add/system/post/*"
if __name__ == "cobbler.modules.nsupdate_delete_system_pre":
return "/var/lib/cobbler/triggers/delete/system/pre/*"
return ""
def run(api: "CobblerAPI", args: List[Any]):
"""
This method executes the trigger, meaning in this case that it updates the dns configuration.
:param api: The api to read metadata from.
:param args: Metadata to log.
:return: "0" on success or a skipped task. If the task failed or problems occurred then an exception is raised.
"""
# Module level log file descriptor
global LOGF # pylint: disable=global-statement
action = None
if __name__ == "cobbler.modules.nsupdate_add_system_post":
action = "replace"
elif __name__ == "cobbler.modules.nsupdate_delete_system_pre":
action = "delete"
else:
return 0
settings = api.settings()
if not settings.nsupdate_enabled:
return 0
# Read our settings
if str(settings.nsupdate_log) is not None: # type: ignore[reportUnnecessaryComparison]
LOGF = open(str(settings.nsupdate_log), "a", encoding="UTF-8") # type: ignore
nslog(f">> starting {__name__} {args}\n")
if str(settings.nsupdate_tsig_key) is not None: # type: ignore[reportUnnecessaryComparison]
keyring = dns.tsigkeyring.from_text(
{str(settings.nsupdate_tsig_key[0]): str(settings.nsupdate_tsig_key[1])}
)
else:
keyring = None
if str(settings.nsupdate_tsig_algorithm) is not None: # type: ignore[reportUnnecessaryComparison]
keyring_algo = str(settings.nsupdate_tsig_algorithm)
else:
keyring_algo = "HMAC-MD5.SIG-ALG.REG.INT"
# nslog( " algo %s, key %s : %s \n" % (keyring_algo, str(settings.nsupdate_tsig_key[0]),
# str(settings.nsupdate_tsig_key[1])) )
# get information about this system
system = api.find_system(args[0])
if system is None or isinstance(system, list):
raise ValueError("Search result was ambiguous!")
# process all interfaces and perform dynamic update for those with --dns-name
for name, interface in system.interfaces.items():
host = interface.dns_name
host_ip = interface.ip_address
if not system.is_management_supported(cidr_ok=False):
continue
if not host:
continue
if host.find(".") == -1:
continue
domain = ".".join(host.split(".")[1:]) # get domain from host name
host = host.split(".")[0] # strip domain
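The domain/host split keys off the first dot only, so every label after it belongs to the domain:

```python
host = "node01.lab.example.com"
domain = ".".join(host.split(".")[1:])  # everything after the first label
short = host.split(".")[0]              # the bare host name

assert domain == "lab.example.com"
assert short == "node01"
```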
nslog(f"processing interface {name} : {interface}\n")
nslog(f"lookup for '{domain}' domain master nameserver... ")
# get master nameserver ip address
answers = dns.resolver.query(domain + ".", dns.rdatatype.SOA) # type: ignore
soa_mname = answers[0].mname # type: ignore
soa_mname_ip = None
for rrset in answers.response.additional: # type: ignore
if rrset.name == soa_mname: # type: ignore
soa_mname_ip = str(rrset.items[0].address) # type: ignore
if soa_mname_ip is None:
ip_address = dns.resolver.query(soa_mname, "A") # type: ignore
for answer in ip_address: # type: ignore
soa_mname_ip = answer.to_text() # type: ignore
nslog(f"{soa_mname} [{soa_mname_ip}]\n")
nslog(f"{action} dns record for {host}.{domain} [{host_ip}] .. ")
# try to update zone with new record
update = dns.update.Update(
domain + ".", keyring=keyring, keyalgorithm=keyring_algo # type: ignore
)
if action == "replace":
update.replace(host, 3600, dns.rdatatype.A, host_ip) # type: ignore
update.replace( # type: ignore
host,
3600,
dns.rdatatype.TXT, # type: ignore
f'"cobbler (date: {time.strftime("%c")})"',
)
else:
update.delete(host, dns.rdatatype.A, host_ip) # type: ignore
update.delete(host, dns.rdatatype.TXT) # type: ignore
try:
response = dns.query.tcp(update, soa_mname_ip) # type: ignore
rcode_txt = dns.rcode.to_text(response.rcode()) # type: ignore
except dns.tsig.PeerBadKey as error: # type: ignore
nslog("failed (refused key)\n>> done\n")
LOGF.close() # type: ignore
raise CX(
f"nsupdate failed, server '{soa_mname}' refusing our key"
) from error
nslog(f"response code: {rcode_txt}\n")
# notice user about update failure
if response.rcode() != dns.rcode.NOERROR: # type: ignore
nslog(">> done\n")
LOGF.close() # type: ignore
raise CX(
f"nsupdate failed (response: {rcode_txt}, name: {host}.{domain}, ip {host_ip}, name server {soa_mname})"
)
nslog(">> done\n")
LOGF.close() # type: ignore
return 0
# ---- cobbler_cobbler/cobbler/modules/sync_post_restart_services.py ----
"""
Restarts the DHCP and/or DNS after a Cobbler sync to apply changes to the configuration files.
"""
import logging
from typing import TYPE_CHECKING, List
from cobbler import utils
from cobbler.utils import process_management
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
logger = logging.getLogger()
def register() -> str:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type
:return: Always ``/var/lib/cobbler/triggers/sync/post/*``
"""
return "/var/lib/cobbler/triggers/sync/post/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
Run the trigger via this method, meaning in this case that depending on the settings dns and/or dhcp services are
restarted.
:param api: The api to resolve settings.
:param args: This parameter is not used currently.
:return: The return code of the service restarts.
"""
settings = api.settings()
which_dhcp_module = api.get_module_name_from_file("dhcp", "module")
which_dns_module = api.get_module_name_from_file("dns", "module")
# special handling as we don't want to restart it twice
has_restarted_dnsmasq = False
ret_code = 0
if settings.manage_dhcp and settings.restart_dhcp:
if which_dhcp_module in ("managers.isc", "managers.dnsmasq"):
dhcp_module = api.get_module_from_file("dhcp", "module")
ret_code = dhcp_module.get_manager(api).restart_service()
if which_dhcp_module == "managers.dnsmasq":
has_restarted_dnsmasq = True
else:
logger.error("unknown DHCP engine: %s", which_dhcp_module)
ret_code = 411
if settings.manage_dns and settings.restart_dns:
if which_dns_module == "managers.bind":
named_service_name = utils.named_service_name()
ret_code = process_management.service_restart(named_service_name)
elif which_dns_module == "managers.dnsmasq" and not has_restarted_dnsmasq:
ret_code = process_management.service_restart("dnsmasq")
elif which_dns_module == "managers.dnsmasq" and has_restarted_dnsmasq:
ret_code = 0
elif which_dns_module == "managers.ndjbdns":
# N-DJBDNS picks up configuration changes automatically and does not need to be restarted.
pass
else:
logger.error("unknown DNS engine: %s", which_dns_module)
ret_code = 412
return ret_code
# ---- cobbler_cobbler/cobbler/modules/__init__.py ----
"""
This part of Cobbler may be utilized by any plugins which are extending Cobbler and core code which can be exchanged
through the ``modules.conf`` file.
A Cobbler module is loaded if it has a method called ``register()``. The method must return a ``str`` which represents
the module category.
"""
# ---- cobbler_cobbler/cobbler/modules/sync_post_wingen.py ----
"""
Create Windows boot files
To create Windows boot files, files must be extracted from the distro. The ``cobbler import``
command extracts the required files and places them where this trigger expects to find them.
To create boot files per profile/system, the trigger uses the following metadata from ``--autoinstall-meta``:
* ``kernel`` - the name of the bootstrap file for profile/system, can be:
* any filename, in the case of PXE boot without ``wimboot``; it must differ from the filenames used
for other profiles/systems of that distro. The trigger creates it from a copy of ``pxeboot.n12``
by replacing the ``bootmgr.exe`` string in the binary copy with the ``bootmgr`` metadata value.
In the case of Windows XP/2003, it replaces the ``NTLDR`` string.
* in case of PXE boot using ``wimboot``, specify the path to ``wimboot`` in the file system,
e.g. ``/var/lib/tftpboot/wimboot``
* in case of iPXE boot using ``wimboot``, specify the path to ``wimboot`` in the file system or any
URL that supports iPXE, e.g. ``http://@@http_server@@/cobbler/images/@@distro_name@@/wimboot``
* ``bootmgr`` - filename of the Boot Manager for the profile/system. The trigger creates it by copying
``bootmgr.exe`` and replacing the ``BCD`` string in the binary copy with the string specified in the
``bcd`` metadata parameter. The filename must be exactly 11 characters long, e.g. ``bootmg1.exe``,
``bootmg2.exe``, etc., and must not match the names for other profiles/systems of the same distro.
For Windows XP/2003, ``setupldr.exe`` is used as the Boot Manager and the string ``winnt.sif`` is
replaced in its copy.
* ``bcd`` - The name of the Windows Boot Configuration Data (BCD) file for the profile/system.
Must be exactly 3 characters and not the same as names for other profiles/systems on the same
distro, e.g. ``000``, ``001``, etc.
* ``winpe`` - The name of the Windows PE image file for the profile/system. The trigger copies it
from the distro and replaces the ``/Windows/System32/startnet.cmd`` file in it with the one
created from the ``startnet.template`` template. Filenames must be unique per the distro.
* ``answerfile`` - the name of the answer file for the Windows installation, e.g. ``autounattend01.xml``
or ``win01.sif`` for Windows XP/2003. The trigger creates the answer file from ``answerfile.template``.
Filenames must be unique per distro.
* ``post_install_script`` - The name of the post-installation script file that will be run after Windows is
installed. To run a script, its filename is substituted into the answerfile template. Any valid Windows
commands can be used in the script, but its usual purpose is to download and run the script for the profile
from ``http://@@http_server@@/cblr/svc/op/autoinstall/profile/@@profile_name@@``; for this, the script is
passed the profile name as a parameter. The post-installation script is created by the trigger from the
``post_inst_cmd.template`` template in the ``sources/$OEM$/$1`` distro directory only if it exists.
The Windows Installer copies the contents of this directory to the target host during installation.
* any other key/value pairs that can be used in ``startnet.template``, ``answerfile.template``,
``post_inst_cmd.template`` templates
"""
import binascii
import logging
import os
import re
import tempfile
from typing import TYPE_CHECKING, Any, Dict, Optional
from cobbler import templar, tftpgen, utils
from cobbler.utils import filesystem_helpers
try:
# pylint: disable=import-error
import hivex # type: ignore
from hivex.hive_types import REG_BINARY # type: ignore
from hivex.hive_types import REG_DWORD # type: ignore
from hivex.hive_types import REG_MULTI_SZ # type: ignore
from hivex.hive_types import REG_SZ # type: ignore
HAS_HIVEX = True
except Exception:
# This is only defined once in each case.
REG_BINARY = None # type: ignore[reportConstantRedefinition]
REG_DWORD = None # type: ignore[reportConstantRedefinition]
REG_MULTI_SZ = None # type: ignore[reportConstantRedefinition]
REG_SZ = None # type: ignore[reportConstantRedefinition]
HAS_HIVEX = False # type: ignore[reportConstantRedefinition]
try:
# pylint: disable=import-error
import pefile # type: ignore
HAS_PEFILE = True
except Exception:
# This is only defined once in each case.
HAS_PEFILE = False # type: ignore
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
ANSWERFILE_TEMPLATE_NAME = "answerfile.template"
POST_INST_CMD_TEMPLATE_NAME = "post_inst_cmd.template"
STARTNET_TEMPLATE_NAME = "startnet.template"
WIMUPDATE = "/usr/bin/wimupdate"
logger = logging.getLogger()
def register() -> Optional[str]:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type
:return: Always ``/var/lib/cobbler/triggers/sync/post/*``
"""
if not HAS_HIVEX:
logging.info(
"python3-hivex not found. If you need Automatic Windows Installation support, please install."
)
return None
if not HAS_PEFILE:
logging.info(
"python3-pefile not found. If you need Automatic Windows Installation support, please install."
)
return None
return "/var/lib/cobbler/triggers/sync/post/*"
def bcdedit(
orig_bcd: str, new_bcd: str, wim: str, sdi: str, startoptions: Optional[str] = None
):
"""
Create new Windows Boot Configuration Data (BCD) based on Microsoft BCD extracted from a WIM image.
:param orig_bcd: Path to the original BCD
:param new_bcd: Path to the new customized BCD
:param wim: Path to the WIM image
:param sdi: Path to the System Deployment Image (SDI)
:param startoptions: Other BCD options
:return:
"""
def winpath_length(wp: str, add: int):
wpl = add + 2 * len(wp)
return wpl.to_bytes((wpl.bit_length() + 7) // 8, "big")
def guid2binary(g: str):
guid = (
g[7]
+ g[8]
+ g[5]
+ g[6]
+ g[3]
+ g[4]
+ g[1]
+ g[2]
+ g[12]
+ g[13]
+ g[10]
+ g[11]
+ g[17]
+ g[18]
)
guid += (
g[15]
+ g[16]
+ g[20]
+ g[21]
+ g[22]
+ g[23]
+ g[25]
+ g[26]
+ g[27]
+ g[28]
+ g[29]
+ g[30]
+ g[31]
)
guid += g[32] + g[33] + g[34] + g[35] + g[36]
return binascii.unhexlify(guid)
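``guid2binary`` reorders the hex digits into the mixed-endian on-disk GUID layout: the first three groups are stored little-endian, the last two big-endian. The standard library's ``uuid`` module produces the same byte layout via ``bytes_le`` (a sketch of the equivalence, not a drop-in replacement for the helper above):

```python
import uuid

g = "{9dea862c-5cdd-4e70-acc1-f32b344d4795}"
# time_low, time_mid and time_hi are byte-swapped; clock_seq and node are not.
assert uuid.UUID(g.strip("{}")).bytes_le.hex() == \
    "2c86ea9ddd5c704eacc1f32b344d4795"
```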
wim = wim.replace("/", "\\")
sdi = sdi.replace("/", "\\")
h = hivex.Hivex(orig_bcd, write=True) # type: ignore
root = h.root() # type: ignore
objs = h.node_get_child(root, "Objects") # type: ignore
for n in h.node_children(objs): # type: ignore
h.node_delete_child(n) # type: ignore
b = h.node_add_child(objs, "{9dea862c-5cdd-4e70-acc1-f32b344d4795}") # type: ignore
d = h.node_add_child(b, "Description") # type: ignore
h.node_set_value(d, {"key": "Type", "t": REG_DWORD, "value": b"\x02\x00\x10\x10"}) # type: ignore
e = h.node_add_child(b, "Elements") # type: ignore
e1 = h.node_add_child(e, "25000004") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_BINARY,
"value": b"\x00\x00\x00\x00\x00\x00\x00\x00",
},
)
e1 = h.node_add_child(e, "12000004") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": "Windows Boot Manager\0".encode(encoding="utf-16le"),
},
)
e1 = h.node_add_child(e, "24000001") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_MULTI_SZ,
"value": "{65c31250-afa2-11df-8045-000c29f37d88}\0\0".encode(
encoding="utf-16le"
),
},
)
e1 = h.node_add_child(e, "16000048") # type: ignore
h.node_set_value(e1, {"key": "Element", "t": REG_BINARY, "value": b"\x01"}) # type: ignore
b = h.node_add_child(objs, "{65c31250-afa2-11df-8045-000c29f37d88}") # type: ignore
d = h.node_add_child(b, "Description") # type: ignore
h.node_set_value(d, {"key": "Type", "t": REG_DWORD, "value": b"\x03\x00\x20\x10"}) # type: ignore
e = h.node_add_child(b, "Elements") # type: ignore
e1 = h.node_add_child(e, "12000002") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": "\\windows\\system32\\winload.exe\0".encode(encoding="utf-16le"),
},
)
e1 = h.node_add_child(e, "12000004") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": "Windows PE\0".encode(encoding="utf-16le"),
},
)
e1 = h.node_add_child(e, "22000002") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": "\\Windows\0".encode(encoding="utf-16le"),
},
)
e1 = h.node_add_child(e, "26000010") # type: ignore
h.node_set_value(e1, {"key": "Element", "t": REG_BINARY, "value": b"\x01"}) # type: ignore
e1 = h.node_add_child(e, "26000022") # type: ignore
h.node_set_value(e1, {"key": "Element", "t": REG_BINARY, "value": b"\x01"}) # type: ignore
e1 = h.node_add_child(e, "11000001") # type: ignore
guid = guid2binary("{ae5534e0-a924-466c-b836-758539a3ee3a}")
wimval = { # type: ignore
"key": "Element",
"t": REG_BINARY,
"value": guid
+ b"\x00\x00\x00\x00\x01\x00\x00\x00"
+ winpath_length(wim, 126)
+ b"\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00"
+ winpath_length(wim, 86)
+ b"\x00\x00\x00\x05\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x48\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00"
+ wim.encode(encoding="utf_16_le")
+ b"\x00\x00",
}
h.node_set_value(e1, wimval) # type: ignore
e1 = h.node_add_child(e, "21000001") # type: ignore
h.node_set_value(e1, wimval) # type: ignore
if startoptions:
e1 = h.node_add_child(e, "12000030") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": startoptions.join("\0").encode(encoding="utf-16le"),
},
)
b = h.node_add_child(objs, "{ae5534e0-a924-466c-b836-758539a3ee3a}") # type: ignore
d = h.node_add_child(b, "Description") # type: ignore
h.node_set_value(d, {"key": "Type", "t": REG_DWORD, "value": b"\x00\x00\x00\x30"}) # type: ignore
e = h.node_add_child(b, "Elements") # type: ignore
e1 = h.node_add_child(e, "12000004") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": "Ramdisk Options\0".encode(encoding="utf-16le"),
},
)
e1 = h.node_add_child(e, "32000004") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_SZ,
"value": sdi.encode(encoding="utf-16le") + b"\x00\x00",
},
)
e1 = h.node_add_child(e, "31000003") # type: ignore
h.node_set_value( # type: ignore
e1,
{
"key": "Element",
"t": REG_BINARY,
"value": b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00"
b"\x00\x00\x00\x00\x48\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
b"\x00\x00\x00\x00\x00\x00\x00\x00",
},
)
h.commit(new_bcd) # type: ignore
def run(api: "CobblerAPI", args: Any):
"""
Runs the trigger, meaning in this case that it creates the Windows boot files.
:param api: The api instance of the Cobbler server. Used to look up if windows_enabled is true.
:param args: The parameter is currently unused for this trigger.
:return: 0 on success, otherwise an exception is raised.
"""
settings = api.settings()
if not settings.windows_enabled:
return 0
if not HAS_HIVEX:
logger.info(
"python3-hivex not found. If you need Automatic Windows Installation support, "
"please install."
)
return 0
if not HAS_PEFILE:
logger.info(
"python3-pefile not found. If you need Automatic Windows Installation support, "
"please install."
)
return 0
profiles = api.profiles()
systems = api.systems()
templ = templar.Templar(api)
tgen = tftpgen.TFTPGen(api)
with open(
os.path.join(settings.windows_template_dir, POST_INST_CMD_TEMPLATE_NAME),
encoding="UTF-8",
) as template_win:
post_tmpl_data = template_win.read()
with open(
os.path.join(settings.windows_template_dir, ANSWERFILE_TEMPLATE_NAME),
encoding="UTF-8",
) as template_win:
tmpl_data = template_win.read()
with open(
os.path.join(settings.windows_template_dir, STARTNET_TEMPLATE_NAME),
encoding="UTF-8",
) as template_start:
tmplstart_data = template_start.read()
def gen_win_files(distro: "Distro", meta: Dict[str, Any]):
boot_path = os.path.join(settings.webdir, "links", distro.name, "boot")
distro_path = distro.find_distro_path()
distro_dir = wim_file_name = os.path.join(
settings.tftpboot_location, "images", distro.name
)
web_dir = os.path.join(settings.webdir, "images", distro.name)
is_winpe = "winpe" in meta and meta["winpe"] != ""
is_bcd = "bcd" in meta and meta["bcd"] != ""
kernel_name = distro.kernel
if "kernel" in meta:
kernel_name = meta["kernel"]
kernel_name = os.path.basename(kernel_name)
is_wimboot = "wimboot" in kernel_name
if is_wimboot and "kernel" in meta and "wimboot" not in distro.kernel:
tgen.copy_single_distro_file(
os.path.join(settings.tftpboot_location, kernel_name), distro_dir, False
)
tgen.copy_single_distro_file(
os.path.join(distro_dir, kernel_name), web_dir, True
)
if "post_install_script" in meta:
post_install_dir = distro_path
if distro.os_version not in ("xp", "2003"):
post_install_dir = os.path.join(post_install_dir, "sources")
post_install_dir = os.path.join(post_install_dir, "$OEM$", "$1")
if not os.path.exists(post_install_dir):
filesystem_helpers.mkdir(post_install_dir)
data = templ.render(post_tmpl_data, meta, None)
post_install_script = os.path.join(
post_install_dir, meta["post_install_script"]
)
logger.info("Build post install script: %s", post_install_script)
with open(post_install_script, "w", encoding="UTF-8") as pi_file:
pi_file.write(data)
if "answerfile" in meta:
data = templ.render(tmpl_data, meta, None)
answerfile_name = os.path.join(distro_dir, meta["answerfile"])
logger.info("Build answer file: %s", answerfile_name)
with open(answerfile_name, "w", encoding="UTF-8") as answerfile:
answerfile.write(data)
tgen.copy_single_distro_file(answerfile_name, web_dir, False)
if "kernel" in meta and "bootmgr" in meta:
wk_file_name = os.path.join(distro_dir, kernel_name)
bootmgr = "bootmgr.exe"
if ".efi" in meta["bootmgr"]:
bootmgr = "bootmgr.efi"
wl_file_name = os.path.join(distro_dir, meta["bootmgr"])
tl_file_name = os.path.join(boot_path, bootmgr)
if distro.os_version in ("xp", "2003") and not is_winpe:
tl_file_name = os.path.join(boot_path, "setupldr.exe")
if len(meta["bootmgr"]) != 5:
logger.error("The loader name must be EXACTLY 5 characters")
return 1
pat1 = re.compile(rb"NTLDR", re.IGNORECASE)
pat2 = re.compile(rb"winnt\.sif", re.IGNORECASE)
with open(tl_file_name, "rb") as file:
out = data = file.read()
if "answerfile" in meta:
if len(meta["answerfile"]) != 9:
logger.error(
"The response file name must be EXACTLY 9 characters"
)
return 1
out = pat2.sub(bytes(meta["answerfile"], "utf-8"), data)
else:
if len(meta["bootmgr"]) != 11:
logger.error(
"The Boot manager file name must be EXACTLY 11 characters"
)
return 1
bcd_name = "bcd"
if is_bcd:
bcd_name = meta["bcd"]
if len(bcd_name) != 3:
logger.error("The BCD file name must be EXACTLY 3 characters")
return 1
if not os.path.isfile(tl_file_name):
logger.error("File not found: %s", tl_file_name)
return 1
pat1 = re.compile(rb"bootmgr\.exe", re.IGNORECASE)
pat2 = re.compile(rb"(\\.B.o.o.t.\\.)(B)(.)(C)(.)(D)", re.IGNORECASE)
bcd_name = bytes(
"\\g<1>"
+ bcd_name[0]
+ "\\g<3>"
+ bcd_name[1]
+ "\\g<5>"
+ bcd_name[2],
"utf-8",
)
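The two patterns above patch the UTF-16LE string ``\Boot\BCD`` inside the loader binary at the byte level; the ``.`` wildcards and the back-references preserve the interleaved NUL bytes. A standalone sketch with a three-character BCD name ``001``:

```python
import re

# "\Boot\BCD" as it appears inside the PE image: UTF-16LE, one NUL per char.
data = "\\Boot\\BCD".encode("utf-16le")
pat = re.compile(rb"(\\.B.o.o.t.\\.)(B)(.)(C)(.)(D)", re.IGNORECASE)
# Replace the three name characters while keeping the NUL bytes between them.
patched = pat.sub(b"\\g<1>0\\g<3>0\\g<5>1", data)
assert patched == "\\Boot\\001".encode("utf-16le")
```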
with open(tl_file_name, "rb") as file:
out = file.read()
if not is_wimboot:
logger.info("Patching build Loader: %s", wl_file_name)
out = pat2.sub(bcd_name, out)
if tl_file_name != wl_file_name:
logger.info("Build Loader: %s from %s", wl_file_name, tl_file_name)
with open(wl_file_name, "wb+") as file:
file.write(out)
tgen.copy_single_distro_file(wl_file_name, web_dir, True)
if not is_wimboot:
if distro.os_version not in ("xp", "2003") or is_winpe:
pe = pefile.PE(wl_file_name, fast_load=True) # type: ignore
pe.OPTIONAL_HEADER.CheckSum = pe.generate_checksum() # type: ignore
pe.write(filename=wl_file_name) # type: ignore
with open(distro.kernel, "rb") as file:
data = file.read()
out = pat1.sub(bytes(meta["bootmgr"], "utf-8"), data)
if wk_file_name != distro.kernel:
logger.info(
"Build PXEBoot: %s from %s", wk_file_name, distro.kernel
)
with open(wk_file_name, "wb+") as file:
file.write(out)
tgen.copy_single_distro_file(wk_file_name, web_dir, True)
if is_bcd:
obcd_file_name = os.path.join(boot_path, "bcd")
bcd_file_name = os.path.join(distro_dir, meta["bcd"])
wim_file_name = "winpe.wim"
if not os.path.isfile(obcd_file_name):
logger.error("File not found: %s", obcd_file_name)
return 1
if is_winpe:
wim_file_name = meta["winpe"]
tftp_image = os.path.join("/images", distro.name)
if is_wimboot:
tftp_image = "/Boot"
wim_file_name = os.path.join(tftp_image, wim_file_name)
sdi_file_name = os.path.join(tftp_image, os.path.basename(distro.initrd))
logger.info(
"Build BCD: %s from %s for %s",
bcd_file_name,
obcd_file_name,
wim_file_name,
)
bcdedit(obcd_file_name, bcd_file_name, wim_file_name, sdi_file_name)
tgen.copy_single_distro_file(bcd_file_name, web_dir, True)
if is_winpe:
ps_file_name = os.path.join(distro_dir, meta["winpe"])
wim_pl_name = os.path.join(boot_path, "winpe.wim")
cmd = ["/usr/bin/cp", "--reflink=auto", wim_pl_name, ps_file_name]
utils.subprocess_call(cmd, shell=False)
if os.path.exists(WIMUPDATE):
data = templ.render(tmplstart_data, meta, None)
with tempfile.NamedTemporaryFile() as pi_file:
pi_file.write(bytes(data, "utf-8"))
pi_file.flush()
cmd = ["/usr/bin/wimdir", ps_file_name, "1"]
wimdir_result = utils.subprocess_get(cmd, shell=False)
wimdir_file_list = wimdir_result.split("\n")
# grep -i for /Windows/System32/startnet.cmd
startnet_path = "/Windows/System32/startnet.cmd"
for file in wimdir_file_list:
if file.lower() == startnet_path.lower():
startnet_path = file
cmd = [
WIMUPDATE,
ps_file_name,
f"--command=add {pi_file.name} {startnet_path}",
]
utils.subprocess_call(cmd, shell=False)
tgen.copy_single_distro_file(ps_file_name, web_dir, True)
for profile in profiles:
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
if distro is None:
raise ValueError("Distro not found!")
if distro.breed == "windows":
logger.info("Profile: %s", profile.name)
meta = utils.blender(api, False, profile)
autoinstall_meta = meta.get("autoinstall_meta", {})
meta.update(autoinstall_meta)
gen_win_files(distro, meta)
for system in systems:
profile = system.get_conceptual_parent()
autoinstall_meta = system.autoinstall_meta
if not profile or not autoinstall_meta or autoinstall_meta == {}:
continue
distro = profile.get_conceptual_parent() # type: ignore
if distro and distro.breed == "windows": # type: ignore[reportUnknownMemberType]
logger.info("System: %s", system.name)
meta = utils.blender(api, False, system)
autoinstall_meta = meta.get("autoinstall_meta", {})
meta.update(autoinstall_meta)
gen_win_files(distro, meta) # type: ignore[reportUnknownArgumentType]
return 0
# ---- cobbler_cobbler/cobbler/modules/scm_track.py ----
"""
Cobbler Trigger Module that puts the content of the Cobbler data directory under version control. Depending on
``scm_track_mode`` in the settings, this can either be git or Mercurial.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import os
from typing import TYPE_CHECKING, Any
from cobbler import utils
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type
:return: Always: ``/var/lib/cobbler/triggers/change/*``
"""
return "/var/lib/cobbler/triggers/change/*"
def run(api: "CobblerAPI", args: Any):
"""
    Runs the trigger, meaning in this case tracking any changes that happen to a config or data file.
:param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.
:param args: The parameter is currently unused for this trigger.
    :return: 0 on success, otherwise an exception is raised.
"""
settings = api.settings()
if not settings.scm_track_enabled:
# feature disabled
return 0
mode = str(settings.scm_track_mode).lower()
author = str(settings.scm_track_author)
push_script = str(settings.scm_push_script)
if mode == "git":
old_dir = os.getcwd()
os.chdir("/var/lib/cobbler")
if os.getcwd() != "/var/lib/cobbler":
raise CX("danger will robinson")
if not os.path.exists("/var/lib/cobbler/.git"):
utils.subprocess_call(["git", "init"], shell=False)
# FIXME: If we know the remote user of an XMLRPC call use them as the author
utils.subprocess_call(["git", "add", "--all", "collections"], shell=False)
utils.subprocess_call(["git", "add", "--all", "templates"], shell=False)
utils.subprocess_call(["git", "add", "--all", "snippets"], shell=False)
utils.subprocess_call(
["git", "commit", "-m", "API update", "--author", author], shell=False
)
if push_script:
utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
if mode == "hg":
# use mercurial
old_dir = os.getcwd()
os.chdir("/var/lib/cobbler")
if os.getcwd() != "/var/lib/cobbler":
raise CX("danger will robinson")
if not os.path.exists("/var/lib/cobbler/.hg"):
utils.subprocess_call(["hg", "init"], shell=False)
# FIXME: If we know the remote user of an XMLRPC call use them as the user
        utils.subprocess_call(["hg", "add", "collections"], shell=False)
        utils.subprocess_call(["hg", "add", "templates"], shell=False)
        utils.subprocess_call(["hg", "add", "snippets"], shell=False)
utils.subprocess_call(
["hg", "commit", "-m", "API update", "--user", author], shell=False
)
if push_script:
utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
raise CX(f"currently unsupported SCM type: {mode}")
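The git branch of `run()` above boils down to a fixed sequence of argument vectors passed to `utils.subprocess_call`. A minimal standalone sketch of that sequence (hypothetical helper name, independent of Cobbler's `utils` module):

```python
from typing import List


def build_git_commands(author: str, push_script: str = "") -> List[List[str]]:
    """Return the argument vectors the git branch of run() executes, in order."""
    cmds = [
        ["git", "add", "--all", "collections"],
        ["git", "add", "--all", "templates"],
        ["git", "add", "--all", "snippets"],
        ["git", "commit", "-m", "API update", "--author", author],
    ]
    if push_script:
        # the push script is split on single spaces, exactly as run() does
        cmds.append(push_script.split(" "))
    return cmds
```

Because the commands are plain argv lists with `shell=False`, no shell quoting of the author string is needed.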
| 3,298 | Python | .py | 72 | 38.75 | 117 | 0.647096 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) | 12,149 | nsupdate_add_system_post.py | cobbler_cobbler/cobbler/modules/nsupdate_add_system_post.py |
"""
Replace (or remove) records in DNS zone for systems created (or removed) by Cobbler
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Adrian Brzezinski <adrbxx@gmail.com>
# DNS toolkit for Python
# - python-dnspython (Debian)
# - python-dns (RH/CentOS)
import time
from typing import IO, TYPE_CHECKING, Any, List, Optional
import dns.query
import dns.resolver
import dns.tsigkeyring
import dns.update
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
LOGF: Optional[IO[str]] = None
def nslog(msg: str) -> None:
"""
Log a message to the logger.
:param msg: The message to log.
"""
if LOGF is not None:
LOGF.write(msg)
def register() -> str:
"""
This method is the obligatory Cobbler registration hook.
:return: The trigger name or an empty string.
"""
if __name__ == "cobbler.modules.nsupdate_add_system_post":
return "/var/lib/cobbler/triggers/add/system/post/*"
if __name__ == "cobbler.modules.nsupdate_delete_system_pre":
return "/var/lib/cobbler/triggers/delete/system/pre/*"
return ""
def run(api: "CobblerAPI", args: List[Any]):
"""
This method executes the trigger, meaning in this case that it updates the dns configuration.
:param api: The api to read metadata from.
:param args: Metadata to log.
:return: "0" on success or a skipped task. If the task failed or problems occurred then an exception is raised.
"""
# Module level log file descriptor
global LOGF # pylint: disable=global-statement
action = None
if __name__ == "cobbler.modules.nsupdate_add_system_post":
action = "replace"
elif __name__ == "cobbler.modules.nsupdate_delete_system_pre":
action = "delete"
else:
return 0
settings = api.settings()
if not settings.nsupdate_enabled:
return 0
# read our settings
    if settings.nsupdate_log:
        LOGF = open(str(settings.nsupdate_log), "a", encoding="UTF-8")
nslog(f">> starting {__name__} {args}\n")
    if settings.nsupdate_tsig_key:
keyring = dns.tsigkeyring.from_text(
{str(settings.nsupdate_tsig_key[0]): str(settings.nsupdate_tsig_key[1])}
)
else:
keyring = None
    if settings.nsupdate_tsig_algorithm:
keyring_algo = str(settings.nsupdate_tsig_algorithm)
else:
keyring_algo = "HMAC-MD5.SIG-ALG.REG.INT"
# nslog( " algo %s, key %s : %s \n" % (keyring_algo,str(settings.nsupdate_tsig_key[0]),
# str(settings.nsupdate_tsig_key[1])) )
# get information about this system
system = api.find_system(args[0])
if system is None or isinstance(system, list):
raise ValueError("Search result was ambiguous!")
# process all interfaces and perform dynamic update for those with --dns-name
for name, interface in system.interfaces.items():
host = interface.dns_name
host_ip = interface.ip_address
if not system.is_management_supported(cidr_ok=False):
continue
if not host:
continue
if host.find(".") == -1:
continue
domain = ".".join(host.split(".")[1:]) # get domain from host name
host = host.split(".")[0] # strip domain
nslog(f"processing interface {name} : {interface}\n")
nslog(f"lookup for '{domain}' domain master nameserver... ")
# get master nameserver ip address
answers = dns.resolver.query(domain + ".", dns.rdatatype.SOA) # type: ignore
soa_mname = answers[0].mname # type: ignore
soa_mname_ip = None
for rrset in answers.response.additional: # type: ignore
if rrset.name == soa_mname: # type: ignore
soa_mname_ip = str(rrset.items[0].address) # type: ignore
if soa_mname_ip is None:
ip_address = dns.resolver.query(soa_mname, "A") # type: ignore
for answer in ip_address: # type: ignore
soa_mname_ip = answer.to_text() # type: ignore
nslog(f"{soa_mname} [{soa_mname_ip}]\n")
nslog(f"{action} dns record for {host}.{domain} [{host_ip}] .. ")
# try to update zone with new record
update = dns.update.Update(
domain + ".", keyring=keyring, keyalgorithm=keyring_algo # type: ignore
)
if action == "replace":
update.replace(host, 3600, dns.rdatatype.A, host_ip) # type: ignore
update.replace( # type: ignore
host,
3600,
dns.rdatatype.TXT, # type: ignore
f'"cobbler (date: {time.strftime("%c")})"',
)
else:
update.delete(host, dns.rdatatype.A, host_ip) # type: ignore
update.delete(host, dns.rdatatype.TXT) # type: ignore
try:
response = dns.query.tcp(update, soa_mname_ip) # type: ignore
rcode_txt = dns.rcode.to_text(response.rcode()) # type: ignore
except dns.tsig.PeerBadKey as error: # type: ignore
nslog("failed (refused key)\n>> done\n")
if LOGF is not None:
LOGF.close()
raise CX(
f"nsupdate failed, server '{soa_mname}' refusing our key"
) from error
nslog(f"response code: {rcode_txt}\n")
# notice user about update failure
if response.rcode() != dns.rcode.NOERROR: # type: ignore
nslog(">> done\n")
if LOGF is not None:
LOGF.close()
raise CX(
f"nsupdate failed (response: {rcode_txt}, name: {host}.{domain}, ip {host_ip}, name server {soa_mname})"
)
nslog(">> done\n")
if LOGF is not None:
LOGF.close()
return 0
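The interface loop in `run()` derives the host label and zone by splitting the `dns_name` on its first dot, and skips names that are empty or contain no dot. A self-contained sketch of that derivation (hypothetical function name, not part of the module):

```python
from typing import Optional, Tuple


def split_fqdn(dns_name: str) -> Optional[Tuple[str, str]]:
    """Split an interface dns_name into (host, domain) as run() does.

    Returns None for names the trigger skips: empty strings and
    names without a dot (no zone to update).
    """
    if not dns_name or "." not in dns_name:
        return None
    parts = dns_name.split(".")
    # first label is the host, the rest is the zone the update targets
    return parts[0], ".".join(parts[1:])
```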
| 6,050 | Python | .py | 139 | 35.381295 | 120 | 0.616695 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) | 12,150 | dnsmasq.py | cobbler_cobbler/cobbler/modules/managers/dnsmasq.py |
"""
This is some of the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: John Eckersberg <jeckersb@redhat.com>
import time
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
from cobbler import utils
from cobbler.modules.managers import DhcpManagerModule, DnsManagerModule
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.system import System
MANAGER = None
def register() -> str:
"""
    The mandatory Cobbler module registration hook.
:return: Always "manage".
"""
return "manage"
class _DnsmasqManager(DnsManagerModule, DhcpManagerModule):
"""
Handles conversion of internal state to the tftpboot tree layout.
"""
@staticmethod
def what() -> str:
"""
This identifies the module.
:return: Will always return ``dnsmasq``.
"""
return "dnsmasq"
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.config: Dict[str, Any] = {}
self.cobbler_hosts_file = self.api.settings().dnsmasq_hosts_file
self.ethers_file = self.api.settings().dnsmasq_ethers_file
utils.create_files_if_not_existing([self.cobbler_hosts_file, self.ethers_file])
def write_configs(self) -> None:
"""
DHCP files are written when ``manage_dhcp`` is set in our settings.
        :raises OSError:
        :raises ValueError:
"""
self.config = self.gen_full_config()
self._write_configs(self.config)
def _write_configs(self, config_data: Optional[Dict[Any, Any]] = None) -> None:
"""
Internal function to write DHCP files.
        :raises OSError:
        :raises ValueError:
"""
if not config_data:
raise ValueError("No config to write.")
settings_file = "/etc/dnsmasq.conf"
template_file = "/etc/cobbler/dnsmasq.template"
try:
with open(template_file, "r", encoding="UTF-8") as template_file_fd:
template_data = template_file_fd.read()
except Exception as error:
raise OSError(f"error writing template to file: {template_file}") from error
config_copy = config_data.copy() # template rendering changes the passed dict
self.logger.info("Writing %s", settings_file)
self.templar.render(template_data, config_copy, settings_file)
def gen_full_config(self) -> Dict[str, str]:
"""Generate DHCP configuration for all systems."""
system_definitions: Dict[str, str] = {}
for system in self.systems:
system_config = self._gen_system_config(system)
system_definitions = utils.merge_dicts_recursive(
system_definitions, system_config, str_append=True
)
metadata = {
"insert_cobbler_system_definitions": system_definitions.get("default", ""),
"date": time.asctime(time.gmtime()),
"cobbler_server": self.settings.server,
"next_server_v4": self.settings.next_server_v4,
"next_server_v6": self.settings.next_server_v6,
"addn_host_file": self.cobbler_hosts_file,
}
# now add in other DHCP expansions that are not tagged with "default"
for dhcp_tag, system_str in system_definitions.items():
if dhcp_tag == "default":
continue
metadata[f"insert_cobbler_system_definitions_{dhcp_tag}"] = system_str
return metadata
def remove_single_system(self, system_obj: "System") -> None:
"""
This method removes a single system.
:param system_obj: System to be removed.
"""
if not self.config:
self.config = self.gen_full_config()
system_config = self._gen_system_config(system_obj)
for dhcp_tag, system_iface_config in system_config.items():
config_key = "insert_cobbler_system_definitions"
if dhcp_tag != "default":
config_key = f"insert_cobbler_system_definitions_{dhcp_tag}"
if system_iface_config not in self.config[config_key]:
continue
self.config[config_key] = self.config[config_key].replace(
system_iface_config, ""
)
self._write_configs(self.config)
self.remove_single_ethers_entry(system_obj)
def _gen_system_config(
self,
system_obj: "System",
) -> Dict[str, str]:
"""
Generate dnsmasq config for a single system.
:param system_obj: System to generate dnsmasq config for
"""
# we used to just loop through each system, but now we must loop
# through each network interface of each system.
system_definitions: Dict[str, str] = {}
if not system_obj.is_management_supported(cidr_ok=False):
self.logger.debug(
"%s does not meet precondition: MAC, IPv4, or IPv6 address is required.",
system_obj.name,
)
return {}
profile = system_obj.get_conceptual_parent()
if profile is None:
raise ValueError("Profile for system not found!")
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
if distro is None:
raise ValueError("Distro for system not found!")
for interface in system_obj.interfaces.values():
mac = interface.mac_address
ip_address = interface.ip_address
host = interface.dns_name
ipv6 = interface.ipv6_address
if not mac:
# can't write a DHCP entry for this system
continue
            # In many real-life situations there is a need to control the IP address and hostname for a specific
# client when only the MAC address is available. In addition to that in some scenarios there is a need
# to explicitly label a host with the applicable architecture in order to correctly handle situations
# where we need something other than ``pxelinux.0``. So we always write a dhcp-host entry with as much
# info as possible to allow maximum control and flexibility within the dnsmasq config.
systxt = "dhcp-host=net:" + distro.arch.value.lower() + "," + mac
if host != "":
systxt += "," + host
if ip_address != "":
systxt += "," + ip_address
if ipv6 != "":
systxt += f",[{ipv6}]"
systxt += "\n"
dhcp_tag = interface.dhcp_tag
if dhcp_tag == "":
dhcp_tag = "default"
if dhcp_tag not in system_definitions:
system_definitions[dhcp_tag] = ""
system_definitions[dhcp_tag] = system_definitions[dhcp_tag] + systxt
return system_definitions
def _find_unique_dhcp_entries(
self, dhcp_entries: str, config_key: str
) -> Tuple[str, str]:
"""
This method checks the dhcp entries in the current config and returns
only those that are unique.
This is necessary because the sync_single_system method is used for
both adding and modifying a single system.
"""
unique_entries = ""
duplicate_entries = ""
for dhcp_entry in dhcp_entries.split("\n"):
if dhcp_entry and dhcp_entry not in self.config[config_key]:
unique_entries += f"{dhcp_entry}\n"
else:
duplicate_entries += f"{dhcp_entry}\n"
return unique_entries, duplicate_entries
def _parse_mac_from_dnsmasq_entries(self, entries: str) -> List[str]:
return [entry.split(",")[1] for entry in entries.split("\n") if entry]
def sync_single_system(self, system: "System"):
"""
Synchronize data for a single system.
:param system: A system to be added.
"""
if not self.config:
# cache miss, need full sync for consistent data
self.regen_ethers()
return self.sync()
system_config = self._gen_system_config(system)
updated_dhcp_entries: Dict[str, str] = {}
duplicate_dhcp_entries = ""
for iface_tag, dhcp_entries in system_config.items():
config_key = "insert_cobbler_system_definitions"
if iface_tag != "default":
config_key = f"insert_cobbler_system_definitions_{iface_tag}"
(
unique_dhcp_entries,
duplicate_dhcp_entries,
) = self._find_unique_dhcp_entries(dhcp_entries, config_key)
updated_dhcp_entries[config_key] = unique_dhcp_entries
self.config = utils.merge_dicts_recursive(
self.config, {config_key: unique_dhcp_entries}, str_append=True
)
if all(not dhcp_entry for dhcp_entry in updated_dhcp_entries.values()):
# No entries were updated. Therefore, User removed
# or modified already existing mac address and we don't
# know which mac address that was. Consequently, trigger
# a full sync to keep DNS and DHCP entries consistent.
self.regen_ethers()
return self.sync()
duplicate_macs = self._parse_mac_from_dnsmasq_entries(duplicate_dhcp_entries)
self.sync_single_ethers_entry(system, duplicate_macs)
self.config["date"] = time.asctime(time.gmtime())
self._write_configs(self.config)
return self.restart_service()
def regen_ethers(self) -> None:
"""
This function regenerates the ethers file. To get more information please read ``man ethers``, the format is
also in there described.
"""
# dnsmasq knows how to read this database of MACs -> IPs, so we'll keep it up to date every time we add a
# system.
with open(self.ethers_file, "w", encoding="UTF-8") as ethers_fh:
for system in self.systems:
ethers_entry = self._gen_single_ethers_entry(system)
if ethers_entry:
ethers_fh.write(ethers_entry)
def _gen_single_ethers_entry(
self, system_obj: "System", duplicate_macs: Optional[List[str]] = None
):
"""
Generate an ethers entry for a system, such as:
00:1A:2B:3C:4D:5E\t1.2.3.4\n
01:2B:3C:4D:5E:6F\t1.2.4.4\n
"""
if not system_obj.is_management_supported(cidr_ok=False):
self.logger.debug(
"%s does not meet precondition: MAC, IPv4, or IPv6 address is required.",
system_obj.name,
)
return ""
output = ""
for interface in system_obj.interfaces.values():
mac = interface.mac_address
ip_address = interface.ip_address
if not mac:
# can't write this w/o a MAC address
continue
if duplicate_macs and mac in duplicate_macs:
# explicitly skipping mac address
continue
if ip_address != "":
output += mac.upper() + "\t" + ip_address + "\n"
return output
def sync_single_ethers_entry(
self, system: "System", duplicate_macs: Optional[List[str]] = None
):
"""
This appends a new single system entry to the ethers file.
:param system: A system to be added.
"""
# dnsmasq knows how to read this database of MACs -> IPs, so we'll keep it up to date every time we add a
# system.
with open(self.ethers_file, "a", encoding="UTF-8") as ethers_fh:
host_entry = self._gen_single_ethers_entry(system, duplicate_macs)
if host_entry:
ethers_fh.write(host_entry)
def remove_single_ethers_entry(
self,
system: "System",
):
"""
        This removes a single system entry from the ethers file.
:param system: A system to be removed.
"""
# dnsmasq knows how to read this database of MACs -> IPs, so we'll keep it up to date every time we add a
# system.
ethers_entry = self._gen_single_ethers_entry(system)
if not ethers_entry:
return
mac_addresses = self._extract_mac_from_ethers_entry(ethers_entry)
utils.remove_lines_in_file(self.ethers_file, mac_addresses)
def _extract_mac_from_ethers_entry(self, ethers_entry: str) -> List[str]:
"""
One ethers entry can contain multiple MAC addresses.
This method transforms:
00:1A:2B:3C:4D:5E\t1.2.3.4\n
01:2B:3C:4D:5E:6F\t1.2.4.4\n
To:
["00:1A:2B:3C:4D:5E", "01:2B:3C:4D:5E:6F"]
"""
return [line.split("\t")[0] for line in ethers_entry.split("\n") if line]
def remove_single_hosts_entry(self, system: "System"):
"""
This removes a single system entry from the Cobbler hosts file.
:param system: A system to be removed.
"""
host_entry = self._gen_single_host_entry(system)
if not host_entry:
return
utils.remove_lines_in_file(self.cobbler_hosts_file, [host_entry])
def _gen_single_host_entry(
self,
system_obj: "System",
):
if not system_obj.is_management_supported(cidr_ok=False):
self.logger.debug(
"%s does not meet precondition: MAC, IPv4, or IPv6 address is required.",
system_obj.name,
)
return ""
output = ""
for _, interface in system_obj.interfaces.items():
mac = interface.mac_address
host = interface.dns_name
cnames = " ".join(interface.cnames)
ipv4 = interface.ip_address
ipv6 = interface.ipv6_address
if not mac:
continue
if host != "" and ipv6 != "":
output += ipv6 + "\t" + host
elif host != "" and ipv4 != "":
output += ipv4 + "\t" + host
if cnames:
output += " " + cnames + "\n"
else:
output += "\n"
return output
def add_single_hosts_entry(
self,
system: "System",
):
"""
This adds a single system entry to the hosts file.
:param system: A system to be added.
"""
host_entries = self._gen_single_host_entry(system)
if not host_entries:
return
# This method can be used for editing, so
# remove duplicate entries first
with open(self.cobbler_hosts_file, "r", encoding="UTF-8") as fd:
for host_line in fd:
host_line = host_line.strip()
if host_line and host_line in host_entries:
host_entries = host_entries.replace(f"{host_line}\n", "")
if not host_entries.strip():
# No new entries present, we should trigger a full sync
self.regen_hosts()
return
with open(self.cobbler_hosts_file, "a", encoding="UTF-8") as regen_hosts_fd:
regen_hosts_fd.write(host_entries)
def regen_hosts(self) -> None:
"""
This rewrites the hosts file and thus also rewrites the dns config.
"""
# dnsmasq knows how to read this database for host info (other things may also make use of this later)
with open(self.cobbler_hosts_file, "w", encoding="UTF-8") as regen_hosts_fd:
for system in self.systems:
host_entry = self._gen_single_host_entry(system)
if host_entry:
regen_hosts_fd.write(host_entry)
def restart_service(self) -> int:
"""
        This restarts the dhcp server and thus applies the newly written config files.
"""
service_name = "dnsmasq"
if self.settings.restart_dhcp:
return_code_service_restart = utils.process_management.service_restart(
service_name
)
if return_code_service_restart != 0:
self.logger.error("%s service failed", service_name)
return return_code_service_restart
return 0
def get_manager(api: "CobblerAPI") -> _DnsmasqManager:
"""
Creates a manager object to manage a dnsmasq server.
:param api: The API to resolve all information with.
:return: The object generated from the class.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _DnsmasqManager(api) # type: ignore
return MANAGER
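`_gen_system_config()` emits one `dhcp-host` line per interface, always tagging the distro architecture and appending hostname, IPv4, and bracketed IPv6 only when present. A self-contained sketch of the same line format (hypothetical function, no Cobbler objects involved):

```python
def dhcp_host_line(
    arch: str, mac: str, host: str = "", ipv4: str = "", ipv6: str = ""
) -> str:
    """Render one dnsmasq dhcp-host entry in the format _gen_system_config() uses."""
    # the net: tag carries the lowercased architecture, e.g. x86_64
    line = "dhcp-host=net:" + arch.lower() + "," + mac
    if host:
        line += "," + host
    if ipv4:
        line += "," + ipv4
    if ipv6:
        # IPv6 addresses are bracketed in dnsmasq dhcp-host entries
        line += f",[{ipv6}]"
    return line + "\n"
```

A system without a MAC address never reaches this point, since the caller skips such interfaces.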
| 16,935 | Python | .py | 384 | 33.682292 | 116 | 0.598118 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) | 12,151 | bind.py | cobbler_cobbler/cobbler/modules/managers/bind.py |
"""
This is some of the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: John Eckersberg <jeckersb@redhat.com>
import re
import socket
import time
from typing import TYPE_CHECKING, Any, Dict, List, Tuple, Union
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.modules.managers import DnsManagerModule
from cobbler.utils import process_management
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
MANAGER = None
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "manage"
class MetadataZoneHelper:
"""
Helper class to hold data for template rendering of named config files.
"""
def __init__(
self,
forward_zones: List[str],
reverse_zones: List[Tuple[str, str]],
zone_include: str,
):
self.forward_zones: List[str] = forward_zones
self.reverse_zones: List[Tuple[str, str]] = reverse_zones
self.zone_include = zone_include
self.bind_master = ""
self.bind_zonefiles = ""
class _BindManager(DnsManagerModule):
@staticmethod
def what() -> str:
"""
Identifies what this class is managing.
:return: Always will return ``bind``.
"""
return "bind"
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.settings_file = utils.namedconf_location()
self.zonefile_base = self.settings.bind_zonefile_path + "/"
def regen_hosts(self) -> None:
"""
Not used.
"""
@staticmethod
def __expand_ipv6(address: str) -> str:
"""
Expands an IPv6 address to long format i.e. ``xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx``
        This function was created by Chris Miller, approved for GPL use, taken verbatim from:
https://forrst.com/posts/Python_Expand_Abbreviated_IPv6_Addresses-1kQ
:param address: Shortened IPv6 address.
:return: The full IPv6 address.
"""
full_address = "" # All groups
expanded_address = "" # Each group padded with leading zeroes
valid_group_count = 8
valid_group_size = 4
if "::" not in address: # All groups are already present
full_address = address
else: # Consecutive groups of zeroes have been collapsed with "::"
sides = address.split("::")
groups_present = 0
for side in sides:
if len(side) > 0:
groups_present += len(side.split(":"))
if len(sides[0]) > 0:
full_address += sides[0] + ":"
for _ in range(0, valid_group_count - groups_present):
full_address += "0000:"
if len(sides[1]) > 0:
full_address += sides[1]
if full_address[-1] == ":":
full_address = full_address[:-1]
groups = full_address.split(":")
for group in groups:
while len(group) < valid_group_size:
group = "0" + group
expanded_address += group + ":"
if expanded_address[-1] == ":":
expanded_address = expanded_address[:-1]
return expanded_address
def __forward_zones(self) -> Dict[str, Dict[str, List[str]]]:
"""
Returns a map of zones and the records that belong in them
"""
zones: Dict[str, Dict[str, List[str]]] = {}
forward_zones = self.settings.manage_forward_zones
if not isinstance(forward_zones, list): # type: ignore
# Gracefully handle when user inputs only a single zone as a string instead of a list with only a single
# item
forward_zones = [forward_zones]
for zone in forward_zones:
zones[zone] = {}
for system in self.systems:
for _, interface in system.interfaces.items():
host: str = interface.dns_name
ipv4: str = interface.ip_address
ipv6: str = interface.ipv6_address
ipv6_sec_addrs: List[str] = interface.ipv6_secondaries
if not system.is_management_supported(cidr_ok=False):
continue
if not host:
                    # gotta have some dns_name and ip or else!
continue
if host.find(".") == -1:
continue
# Match the longest zone! E.g. if you have a host a.b.c.d.e
# if manage_forward_zones has:
# - c.d.e
# - b.c.d.e
# then a.b.c.d.e should go in b.c.d.e
best_match = ""
for zone in zones:
if re.search(rf"\.{zone}$", host) and len(zone) > len(best_match):
best_match = zone
# no match
if best_match == "":
continue
# strip the zone off the dns_name
host = re.sub(rf"\.{best_match}$", "", host)
# if we are to manage ipmi hosts, add that too
if self.settings.bind_manage_ipmi:
if system.power_address != "":
power_address_is_ip = False
# see if the power address is an IP
try:
socket.inet_aton(system.power_address)
power_address_is_ip = True
except socket.error:
power_address_is_ip = False
# if the power address is an IP, then add it to the DNS with the host suffix of "-ipmi"
                        # TODO: Perhaps the suffix can be configurable through settings?
if power_address_is_ip:
ipmi_host = host + "-ipmi"
ipmi_ips: List[str] = [system.power_address]
try:
zones[best_match][ipmi_host] = (
ipmi_ips + zones[best_match][ipmi_host]
)
except KeyError:
zones[best_match][ipmi_host] = ipmi_ips
# Create a list of IP addresses for this host
ips: List[str] = []
if ipv4:
ips.append(ipv4)
if ipv6:
ips.append(ipv6)
if ipv6_sec_addrs:
ips += ipv6_sec_addrs
if ips:
try:
zones[best_match][host] = ips + zones[best_match][host]
except KeyError:
zones[best_match][host] = ips
return zones
def __reverse_zones(self) -> Dict[Any, Any]:
"""
Returns a map of zones and the records that belong in them
:return: A dict with all zones.
"""
zones: Dict[str, Dict[str, Union[str, List[str]]]] = {}
reverse_zones = self.settings.manage_reverse_zones
if not isinstance(reverse_zones, list): # type: ignore
# Gracefully handle when user inputs only a single zone as a string instead of a list with only a single
# item
reverse_zones = [reverse_zones]
for zone in reverse_zones:
            # expand any IPv6 zones
if ":" in zone:
zone = (self.__expand_ipv6(zone + "::1"))[:19]
zones[zone] = {}
for system in self.systems:
for _, interface in system.interfaces.items():
host = interface.dns_name
ip_address = interface.ip_address
ipv6 = interface.ipv6_address
ipv6_sec_addrs = interface.ipv6_secondaries
if not system.is_management_supported(cidr_ok=False):
continue
if not host or ((not ip_address) and (not ipv6)):
# gotta have some dns_name and ip or else!
continue
if ip_address:
# Match the longest zone! E.g. if you have an ip 1.2.3.4
# if manage_reverse_zones has:
# - 1.2
# - 1.2.3
# then 1.2.3.4 should go in 1.2.3
best_match = ""
for zone in zones:
if re.search(rf"^{zone}\.", ip_address) and len(zone) > len(
best_match
):
best_match = zone
if best_match != "":
                        # strip the zone off the front of the ip, reverse the remaining octets,
                        # and append the dns_name
ip_address = ip_address.replace(best_match, "", 1)
if ip_address[0] == ".": # strip leading '.' if it's there
ip_address = ip_address[1:]
tokens = ip_address.split(".")
tokens.reverse()
ip_address = ".".join(tokens)
zones[best_match][ip_address] = host + "."
if ipv6 or ipv6_sec_addrs:
ip6s: List[str] = []
if ipv6:
ip6s.append(ipv6)
for each_ipv6 in ip6s + ipv6_sec_addrs:
# convert the IPv6 address to long format
long_ipv6 = self.__expand_ipv6(each_ipv6)
# All IPv6 zones are forced to have the format xxxx:xxxx:xxxx:xxxx
zone = long_ipv6[:19]
ipv6_host_part = long_ipv6[20:]
tokens = list(re.sub(":", "", ipv6_host_part))
tokens.reverse()
ip_address = ".".join(tokens)
zones[zone][ip_address] = host + "."
return zones
def __write_named_conf(self) -> None:
"""
Write out the named.conf main config file from the template.
        :raises OSError:
"""
settings_file = self.settings.bind_chroot_path + self.settings_file
template_file = "/etc/cobbler/named.template"
# forward_zones = self.settings.manage_forward_zones
# reverse_zones = self.settings.manage_reverse_zones
metadata = MetadataZoneHelper(list(self.__forward_zones().keys()), [], "")
metadata.bind_zonefiles = self.settings.bind_zonefile_path
for zone in metadata.forward_zones:
txt = f"""
zone "{zone}." {{
type master;
file "{zone}";
}};
"""
metadata.zone_include = metadata.zone_include + txt
for zone in self.__reverse_zones():
# IPv6 zones are : delimited
if ":" in zone:
# if IPv6, assume xxxx:xxxx:xxxx:xxxx
# 0123456789012345678
long_zone = (self.__expand_ipv6(zone + "::1"))[:19]
tokens = list(re.sub(":", "", long_zone))
tokens.reverse()
arpa = ".".join(tokens) + ".ip6.arpa"
else:
# IPv4 address split by '.'
tokens = zone.split(".")
tokens.reverse()
arpa = ".".join(tokens) + ".in-addr.arpa"
#
metadata.reverse_zones.append((zone, arpa))
txt = f"""
zone "{arpa}." {{
type master;
file "{zone}";
}};
"""
metadata.zone_include = metadata.zone_include + txt
try:
with open(template_file, "r", encoding="UTF-8") as template_fd:
template_data = template_fd.read()
except Exception as error:
raise OSError(
f"error reading template from file: {template_file}"
) from error
self.logger.info("generating %s", settings_file)
self.templar.render(template_data, metadata.__dict__, settings_file)
def __write_secondary_conf(self) -> None:
"""
Write out the secondary.conf secondary config file from the template.
"""
settings_file = self.settings.bind_chroot_path + "/etc/secondary.conf"
template_file = "/etc/cobbler/secondary.template"
# forward_zones = self.settings.manage_forward_zones
# reverse_zones = self.settings.manage_reverse_zones
metadata = MetadataZoneHelper(list(self.__forward_zones().keys()), [], "")
metadata.bind_zonefiles = self.settings.bind_zonefile_path
for zone in metadata.forward_zones:
txt = f"""
zone "{zone}." {{
type slave;
masters {{
{self.settings.bind_master};
}};
file "data/{zone}";
}};
"""
metadata.zone_include = metadata.zone_include + txt
for zone in self.__reverse_zones():
# IPv6 zones are : delimited
if ":" in zone:
# if IPv6, assume xxxx:xxxx:xxxx:xxxx for the zone
# 0123456789012345678
long_zone = (self.__expand_ipv6(zone + "::1"))[:19]
tokens = list(re.sub(":", "", long_zone))
tokens.reverse()
arpa = ".".join(tokens) + ".ip6.arpa"
else:
# IPv4 zones split by '.'
tokens = zone.split(".")
tokens.reverse()
arpa = ".".join(tokens) + ".in-addr.arpa"
#
metadata.reverse_zones.append((zone, arpa))
txt = f"""
zone "{arpa}." {{
type slave;
masters {{
{self.settings.bind_master};
}};
file "data/{zone}";
}};
"""
metadata.zone_include = metadata.zone_include + txt
metadata.bind_master = self.settings.bind_master
try:
with open(template_file, "r", encoding="UTF-8") as template_fd:
template_data = template_fd.read()
except Exception as error:
raise OSError(
f"error reading template from file: {template_file}"
) from error
self.logger.info("generating %s", settings_file)
self.templar.render(template_data, metadata.__dict__, settings_file)
def __ip_sort(self, ips: List[str]) -> List[str]:
"""
Sorts IP addresses (or partial addresses) in a numerical fashion per-octet or quartet
:param ips: A list of all IP addresses (v6 and v4 mixed possible) which shall be sorted.
:return: The list with sorted IP addresses.
"""
quartets: List[List[int]] = []
octets: List[List[int]] = []
for each_ip in ips:
# IPv6 addresses are ':' delimited
if ":" in each_ip:
# IPv6
# strings to integer quartet chunks so we can sort numerically
quartets.append([int(i, 16) for i in each_ip.split(":")])
else:
# IPv4
# strings to integer octet chunks so we can sort numerically
octets.append([int(i) for i in each_ip.split(".")])
quartets.sort()
octets.sort()
# integers back to four character hex strings
quartets_str = [[format(i, "04x") for i in x] for x in quartets]
# integers back to strings
octets_str = [[str(i) for i in x] for x in octets]
return [".".join(i) for i in octets_str] + [":".join(i) for i in quartets_str]
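A condensed sketch of the numeric sort above (`ip_sort` is a hypothetical standalone name): splitting each address into integer chunks is what makes "9.0.0.1" sort before "10.0.0.2", where a plain string sort would order them the other way.

```python
def ip_sort(ips):
    # Convert each address to a list of integer chunks, sort the lists,
    # then render them back to strings; IPv4 first, then IPv6.
    quartets = sorted([int(p, 16) for p in ip.split(":")] for ip in ips if ":" in ip)
    octets = sorted([int(p) for p in ip.split(".")] for ip in ips if ":" not in ip)
    return [".".join(str(i) for i in o) for o in octets] + [
        ":".join(format(i, "04x") for i in q) for q in quartets
    ]

print(ip_sort(["10.0.0.2", "9.0.0.1", "2.0.0.1"]))  # -> ['2.0.0.1', '9.0.0.1', '10.0.0.2']
```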
def __pretty_print_host_records(
self, hosts: Dict[str, Any], rectype: str = "A", rclass: str = "IN"
) -> str:
"""
Format host records by order and with consistent indentation
:param hosts: The hosts to pretty print.
:param rectype: The record type.
:param rclass: The record class.
:return: A string with all pretty printed hosts.
"""
# Warn about interfaces without a dns_name; iterate over the systems so
# the warning can name the particular system affected.
for system in self.systems:
for name, interface in system.interfaces.items():
if interface.dns_name == "":
self.logger.info(
"Warning: dns_name unspecified in the system: %s, while writing host records",
system.name,
)
names = list(hosts.keys())
if not names:
return "" # zones with no hosts
if rectype == "PTR":
names = self.__ip_sort(names)
else:
names.sort()
max_name = max([len(i) for i in names])
result = ""
for name in names:
spacing = " " * (max_name - len(name))
my_name = f"{name}{spacing}"
my_host_record = hosts[name]
my_host_list = []
if isinstance(my_host_record, str):
my_host_list = [my_host_record]
else:
my_host_list = my_host_record
for my_host in my_host_list:
my_rectype = rectype[:]
if rectype == "A":
if ":" in my_host:
my_rectype = "AAAA"
else:
my_rectype = "A "
result += f"{my_name} {rclass} {my_rectype} {my_host};\n"
return result
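The column alignment performed above can be shown on its own; `align_records` is a hypothetical sketch, not Cobbler's method:

```python
def align_records(hosts, rclass="IN", rectype="A"):
    # Pad every name to the width of the longest one so the record
    # columns line up, mirroring the spacing computation above.
    width = max(len(name) for name in hosts)
    return "".join(
        f"{name.ljust(width)} {rclass} {rectype} {hosts[name]};\n"
        for name in sorted(hosts)
    )

print(align_records({"db": "10.0.0.2", "webserver": "10.0.0.1"}))
```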
def __pretty_print_cname_records(
self, hosts: Dict[str, Any], rectype: str = "CNAME"
) -> str:
"""
Format CNAMEs and with consistent indentation
:param hosts: This parameter is currently unused.
:param rectype: The type of record which shall be pretty printed.
:return: The pretty printed version of the cname records.
"""
result = ""
# This loop warns about and skips hosts without a dns_name instead of exiting
# outright, which used to result in empty records without any warning to the user.
for system in self.systems:
for _, interface in system.interfaces.items():
cnames = interface.cnames
try:
if interface.dns_name != "":
dnsname = interface.dns_name.split(".")[0]
for cname in cnames:
result += f"{cname.split('.')[0]} {rectype} {dnsname};\n"
else:
self.logger.warning(
'CNAME generation for system "%s" was skipped due to a missing dns_name entry while writing '
"records!",
system.name,
)
continue
except Exception as exception:
self.logger.exception(
"Unspecified error during creation of CNAME for bind9 records!",
exc_info=exception,
)
return result
def __write_zone_files(self) -> None:
"""
Write out the forward and reverse zone files for all configured zones
"""
default_template_file = "/etc/cobbler/zone.template"
cobbler_server = self.settings.server
# this could be a config option too
serial_filename = "/var/lib/cobbler/bind_serial"
# need a counter for new bind format
serial = time.strftime("%Y%m%d00")
try:
with open(serial_filename, "r", encoding="UTF-8") as serialfd:
old_serial = serialfd.readline()
# same date
if serial[0:8] == old_serial[0:8]:
if int(old_serial[8:10]) < 99:
serial = f"{serial[0:8]}{int(old_serial[8:10]) + 1:02d}"
except Exception:
pass
with open(serial_filename, "w", encoding="UTF-8") as serialfd:
serialfd.write(serial)
forward = self.__forward_zones()
reverse = self.__reverse_zones()
try:
with open(default_template_file, "r", encoding="UTF-8") as template_fd:
default_template_data = template_fd.read()
except Exception as error:
raise CX(
f"error reading template from file: {default_template_file}"
) from error
zonefileprefix = self.settings.bind_chroot_path + self.zonefile_base
for zone, hosts in forward.items():
metadata = {
"cobbler_server": cobbler_server,
"serial": serial,
"zonename": zone,
"zonetype": "forward",
"cname_record": "",
"host_record": "",
}
if ":" in zone:
long_zone = (self.__expand_ipv6(zone + "::1"))[:19]
tokens = list(re.sub(":", "", long_zone))
tokens.reverse()
zone_origin = ".".join(tokens) + ".ip6.arpa."
else:
zone_origin = ""
# grab zone-specific template if it exists
try:
with open(
f"/etc/cobbler/zone_templates/{zone}", encoding="UTF-8"
) as zone_fd:
# If this is an IPv6 zone, set the origin to the zone for this
# template
if zone_origin:
template_data = (
r"\$ORIGIN " + zone_origin + "\n" + zone_fd.read()
)
else:
template_data = zone_fd.read()
except Exception:
# If this is an IPv6 zone, set the origin to the zone for this
# template
if zone_origin:
template_data = (
r"\$ORIGIN " + zone_origin + "\n" + default_template_data
)
else:
template_data = default_template_data
metadata["cname_record"] = self.__pretty_print_cname_records(hosts)
metadata["host_record"] = self.__pretty_print_host_records(hosts)
zonefilename = zonefileprefix + zone
self.logger.info("generating (forward) %s", zonefilename)
self.templar.render(template_data, metadata, zonefilename)
for zone, hosts in reverse.items():
metadata = {
"cobbler_server": cobbler_server,
"serial": serial,
"zonename": zone,
"zonetype": "reverse",
"cname_record": "",
"host_record": "",
}
# grab zone-specific template if it exists
try:
with open(
f"/etc/cobbler/zone_templates/{zone}", encoding="UTF-8"
) as zone_fd:
template_data = zone_fd.read()
except Exception:
template_data = default_template_data
metadata["cname_record"] = self.__pretty_print_cname_records(hosts)
metadata["host_record"] = self.__pretty_print_host_records(
hosts, rectype="PTR"
)
zonefilename = zonefileprefix + zone
self.logger.info("generating (reverse) %s", zonefilename)
self.templar.render(template_data, metadata, zonefilename)
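The serial handling at the top of this method follows the BIND YYYYMMDDnn convention; a minimal sketch (`bump_serial` is a hypothetical name):

```python
def bump_serial(today: str, old: str) -> str:
    # BIND-style YYYYMMDDnn serials: when the date part matches the
    # previously written serial, bump the two-digit counter instead of
    # reusing the same value.
    if today[0:8] == old[0:8] and int(old[8:10]) < 99:
        return f"{today[0:8]}{int(old[8:10]) + 1:02d}"
    return today

print(bump_serial("2024090500", "2024090503"))  # -> 2024090504
```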
def write_configs(self) -> None:
"""
BIND files are written when ``manage_dns`` is set in our settings.
"""
self.__write_named_conf()
self.__write_secondary_conf()
self.__write_zone_files()
def restart_service(self) -> int:
"""
This syncs the bind server with its new config files.
Basically this restarts the service to apply the changes.
"""
named_service_name = utils.named_service_name()
return process_management.service_restart(named_service_name)
def get_manager(api: "CobblerAPI") -> "_BindManager":
"""
This returns the object to manage a BIND server located locally on the Cobbler server.
:param api: The API to resolve all information with.
:return: The BindManager object to manage bind with.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _BindManager(api) # type: ignore
return MANAGER
# ---- cobbler_cobbler/cobbler/modules/managers/isc.py | repo: cobbler/cobbler | license: GPL-2.0 ----
"""
This is some of the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: John Eckersberg <jeckersb@redhat.com>
import shutil
import time
from typing import TYPE_CHECKING, Any, Dict, Optional, Set
from cobbler import enums, utils
from cobbler.enums import Archs
from cobbler.modules.managers import DhcpManagerModule
from cobbler.utils import process_management
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.profile import Profile
from cobbler.items.system import NetworkInterface, System
MANAGER = None
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "manage"
class _IscManager(DhcpManagerModule):
@staticmethod
def what() -> str:
"""
Static method to identify the manager.
:return: Always "isc".
"""
return "isc"
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.settings_file_v4 = utils.dhcpconf_location(enums.DHCP.V4)
self.settings_file_v6 = utils.dhcpconf_location(enums.DHCP.V6)
# cache config to allow adding systems incrementally
self.config: Dict[str, Any] = {}
self.generic_entry_cnt = 0
def sync_single_system(self, system: "System"):
"""
Update the config with data for a single system, write it to the filesystem, and restart the DHCP service.
:param system: System object to generate the config for.
"""
if not self.config:
# cache miss, need full sync for consistent data
return self.sync()
profile: Optional["Profile"] = system.get_conceptual_parent() # type: ignore
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
blend_data = utils.blender(self.api, False, system)
system_config = self._gen_system_config(system, blend_data, distro)
if all(
mac in self.config.get("dhcp_tags", {}).get(dhcp_tag, {})
for dhcp_tag, interface in system_config.items()
for mac in interface
):
# All interfaces in the added system are already cached. Therefore,
# the user might have removed an interface and we don't know which one.
# Trigger a full sync.
return self.sync()
self.config = utils.merge_dicts_recursive(
self.config,
{"dhcp_tags": system_config},
)
self.config["date"] = time.asctime(time.gmtime())
self._write_configs(self.config)
return self.restart_service()
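The incremental update above relies on `utils.merge_dicts_recursive`; a minimal standalone sketch of that kind of merge (assumed behavior, not Cobbler's actual helper):

```python
def merge_dicts_recursive(a, b):
    # Values from b win; nested dicts are merged key by key, so an added
    # interface does not clobber the other entries under its dhcp_tag.
    out = dict(a)
    for key, value in b.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_dicts_recursive(out[key], value)
        else:
            out[key] = value
    return out

cache = {"dhcp_tags": {"default": {"aa:bb": {"name": "web1"}}}}
update = {"dhcp_tags": {"default": {"cc:dd": {"name": "web2"}}}}
merged = merge_dicts_recursive(cache, update)
print(sorted(merged["dhcp_tags"]["default"]))  # -> ['aa:bb', 'cc:dd']
```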
def remove_single_system(self, system_obj: "System") -> None:
if not self.config:
self.write_configs()
return
profile: Optional["Profile"] = system_obj.get_conceptual_parent() # type: ignore
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
blend_data = utils.blender(self.api, False, system_obj)
system_config = self._gen_system_config(system_obj, blend_data, distro)
for dhcp_tag, mac_addresses in system_config.items():
for mac_address in mac_addresses:
self.config.get("dhcp_tags", {}).get(dhcp_tag, {}).pop(mac_address, "")
self.config["date"] = time.asctime(time.gmtime())
self._write_configs(self.config)
self.restart_service()
def _gen_system_config(
self,
system_obj: "System",
system_blend_data: Dict[str, Any],
distro_obj: Optional["Distro"],
) -> Dict[str, Any]:
"""
Generate DHCP config for a single system.
:param system_obj: System to generate DHCP config for
:param system_blend_data: utils.blender() data for the System
:param distro_obj: Optional, is used to access distro-specific information like arch when present
"""
dhcp_tags: Dict[str, Any] = {"default": {}}
processed_system_master_interfaces: Set[str] = set()
ignore_macs: Set[str] = set()
if not system_obj.is_management_supported(cidr_ok=False):
self.logger.debug(
"%s does not meet precondition: MAC, IPv4, or IPv6 address is required.",
system_obj.name,
)
return {}
profile: Optional["Profile"] = system_obj.get_conceptual_parent() # type: ignore
for iface_name, iface_obj in system_obj.interfaces.items():
iface = iface_obj.to_dict()
mac = iface_obj.mac_address
if (
not self.settings.always_write_dhcp_entries
and not system_blend_data["netboot_enabled"]
and iface["static"]
):
continue
if not mac:
self.logger.warning("%s has no MAC address", system_obj.name)
continue
iface["gateway"] = iface_obj.if_gateway or system_obj.gateway
if iface["interface_type"] in (
"bond_slave",
"bridge_slave",
"bonded_bridge_slave",
):
if iface["interface_master"] not in system_obj.interfaces:
# Can't write DHCP entry: master interface does not exist
continue
master_name = iface["interface_master"]
master_iface = system_obj.interfaces[master_name]
# There may be multiple bonded interfaces, need composite index
system_master_name = f"{system_obj.name}-{master_name}"
if system_master_name not in processed_system_master_interfaces:
processed_system_master_interfaces.add(system_master_name)
else:
ignore_macs.add(mac)
# IPv4
iface["netmask"] = master_iface.netmask
iface["ip_address"] = master_iface.ip_address
if not iface["ip_address"]:
iface["ip_address"] = self._find_ip_addr(
system_obj.interfaces, prefix=master_name, ip_version="ipv4"
)
# IPv6
iface["ipv6_address"] = master_iface.ipv6_address
if not iface["ipv6_address"]:
iface["ipv6_address"] = self._find_ip_addr(
system_obj.interfaces, prefix=master_name, ip_version="ipv6"
)
# common
host = master_iface.dns_name
dhcp_tag = master_iface.dhcp_tag
else:
# TODO: simplify _slave / non_slave branches
host = iface["dns_name"]
dhcp_tag = iface["dhcp_tag"]
if distro_obj is not None:
iface["distro"] = distro_obj.to_dict()
if profile is not None:
iface["profile"] = profile.to_dict() # type: ignore
if host:
if iface_name == "eth0":
iface["name"] = host
else:
iface["name"] = f"{host}-{iface_name}"
else:
self.generic_entry_cnt += 1
iface["name"] = f"generic{self.generic_entry_cnt:d}"
for key in (
"next_server_v6",
"next_server_v4",
"filename",
"netboot_enabled",
"hostname",
"enable_ipxe",
"name_servers",
):
iface[key] = system_blend_data[key]
iface["owner"] = system_blend_data["name"]
# esxi
if distro_obj is not None and distro_obj.os_version.startswith("esxi"):
iface["filename_esxi"] = (
"esxi/system",
# config filename can be None
system_obj.get_config_filename(interface=iface_name, loader="pxe")
or "",
"mboot.efi",
)
elif distro_obj is not None and not iface["filename"]:
if distro_obj.arch in (
Archs.PPC,
Archs.PPC64,
Archs.PPC64LE,
Archs.PPC64EL,
):
iface["filename"] = "grub/grub.ppc64le"
elif distro_obj.arch == Archs.AARCH64:
iface["filename"] = "grub/grubaa64.efi"
if not dhcp_tag:
dhcp_tag = system_blend_data.get("dhcp_tag", "")
if dhcp_tag == "":
dhcp_tag = "default"
if dhcp_tag not in dhcp_tags:
dhcp_tags[dhcp_tag] = {mac: iface}
else:
dhcp_tags[dhcp_tag][mac] = iface
for macs in dhcp_tags.values():
for mac in list(macs):  # copy keys: deleting while iterating raises RuntimeError
if mac in ignore_macs:
del macs[mac]
return dhcp_tags
def _find_ip_addr(
self,
interfaces: Dict[str, "NetworkInterface"],
prefix: str,
ip_version: str,
) -> str:
"""Find the first interface with an IP address that begins with prefix."""
if ip_version.lower() == "ipv4":
attr_name = "ip_address"
elif ip_version.lower() == "ipv6":
attr_name = "ipv6_address"
else:
return ""
for name, obj in interfaces.items():
if name.startswith(prefix + ".") and hasattr(obj, attr_name):
return getattr(obj, attr_name)
return ""
def gen_full_config(self) -> Dict[str, Any]:
"""Generate DHCP configuration for all systems."""
dhcp_tags: Dict[str, Any] = {"default": {}}
self.generic_entry_cnt = 0
for system in self.systems:
profile: Optional["Profile"] = system.get_conceptual_parent() # type: ignore
if profile is None:
continue
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
blended_system = utils.blender(self.api, False, system)
new_tags = self._gen_system_config(system, blended_system, distro)
dhcp_tags = utils.merge_dicts_recursive(dhcp_tags, new_tags)
metadata = {
"date": time.asctime(time.gmtime()),
"cobbler_server": f"{self.settings.server}:{self.settings.http_port}",
"next_server_v4": self.settings.next_server_v4,
"next_server_v6": self.settings.next_server_v6,
"dhcp_tags": dhcp_tags,
}
return metadata
def _write_config(
self,
config_data: Dict[Any, Any],
template_file: str,
settings_file: str,
) -> None:
"""DHCP files are written when ``manage_dhcp_v4`` or ``manage_dhcp_v6``
is set in the settings for the respective version.
:param config_data: DHCP data to write.
:param template_file: The location of the DHCP template.
:param settings_file: The location of the final config file.
"""
try:
with open(template_file, "r", encoding="UTF-8") as template_fd:
template_data = template_fd.read()
except OSError as e:
self.logger.error("Can't read dhcp template '%s':\n%s", template_file, e)
return
config_copy = config_data.copy() # template rendering changes the passed dict
self.logger.info("Writing %s", settings_file)
self.templar.render(template_data, config_copy, settings_file)
def write_v4_config(
self,
config_data: Optional[Dict[Any, Any]] = None,
template_file: str = "/etc/cobbler/dhcp.template",
):
"""Write DHCP files for IPv4.
:param config_data: DHCP data to write.
:param template_file: The location of the DHCP template.
"""
if not config_data:
raise ValueError("No config to write.")
self._write_config(config_data, template_file, self.settings_file_v4)
def write_v6_config(
self,
config_data: Optional[Dict[Any, Any]] = None,
template_file: str = "/etc/cobbler/dhcp6.template",
):
"""Write DHCP files for IPv6.
:param config_data: DHCP data to write.
:param template_file: The location of the DHCP template.
"""
if not config_data:
raise ValueError("No config to write.")
self._write_config(config_data, template_file, self.settings_file_v6)
def restart_dhcp(self, service_name: str, version: int) -> int:
"""
This syncs the dhcp server with its new config files.
Basically this restarts the service to apply the changes.
:param service_name: The name of the DHCP service.
:param version: The DHCP protocol version (4 or 6) of the service to test and restart.
"""
dhcpd_path = shutil.which(service_name)
if dhcpd_path is None:
self.logger.error("%s path could not be found", service_name)
return -1
return_code_service_restart = utils.subprocess_call(
[dhcpd_path, f"-{version}", "-t", "-q"], shell=False
)
if return_code_service_restart != 0:
self.logger.error("Testing config - %s -t failed", service_name)
if version == 4:
return_code_service_restart = process_management.service_restart(
service_name
)
else:
return_code_service_restart = process_management.service_restart(
f"{service_name}{version}"
)
if return_code_service_restart != 0:
self.logger.error("%s service failed", service_name)
return return_code_service_restart
def write_configs(self) -> None:
"""
DHCP files are written when ``manage_dhcp`` is set in our settings.
:raises OSError:
:raises ValueError:
"""
self.generic_entry_cnt = 0
self.config = self.gen_full_config()
self._write_configs(self.config)
def _write_configs(self, data: Optional[Dict[Any, Any]] = None) -> None:
if not data:
raise ValueError("No config to write.")
if self.settings.manage_dhcp_v4:
self.write_v4_config(data)
if self.settings.manage_dhcp_v6:
self.write_v6_config(data)
def restart_service(self) -> int:
if not self.settings.restart_dhcp:
return 0
# Even if one fails, try both and return an error
ret = 0
service = utils.dhcp_service_name()
if self.settings.manage_dhcp_v4:
ret |= self.restart_dhcp(service, 4)
if self.settings.manage_dhcp_v6:
ret |= self.restart_dhcp(service, 6)
return ret
def get_manager(api: "CobblerAPI") -> _IscManager:
"""
Creates a manager object to manage an isc dhcp server.
:param api: The API which holds all information in the current Cobbler instance.
:return: The object to manage the server with.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _IscManager(api) # type: ignore
return MANAGER
# ---- cobbler_cobbler/cobbler/modules/managers/__init__.py | repo: cobbler/cobbler | license: GPL-2.0 ----
"""
This module contains extensions for services Cobbler is managing. The services are restarted via the ``service`` command
or alternatively through the server executables directly. Cobbler does not announce the restarts but is expecting to be
allowed to do this on its own at any given time. Thus all services managed by Cobbler should not be touched by any
other tool or administrator.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2021 SUSE LLC
# SPDX-FileCopyrightText: Thomas Renninger <trenn@suse.de>
import logging
from abc import abstractmethod
from typing import TYPE_CHECKING, List
from cobbler import templar
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.system import System
class ManagerModule:
"""
Base class for Manager modules located in ``modules/manager/*.py``
These are typically but not necessarily used to manage systemd services.
Enabling can be done via settings ``manage_*`` (e.g. ``manage_dhcp``) and ``restart_*`` (e.g. ``restart_dhcp``).
Different modules could manage the same functionality as dhcp can be managed via isc.py or dnsmasq.py
(compare with ``/etc/cobbler/modules.py``).
"""
@staticmethod
def what() -> str:
"""
Static method to identify the manager module.
Must be overwritten by the inheriting class
"""
return "undefined"
def __init__(self, api: "CobblerAPI"):
"""
Constructor
:param api: The API instance to resolve all information with.
"""
self.logger = logging.getLogger()
self.api = api
self.distros = self.api.distros()
self.profiles = self.api.profiles()
self.systems = self.api.systems()
self.settings = self.api.settings()
self.repos = self.api.repos()
self.templar = templar.Templar(self.api)
def write_configs(self) -> None:
"""
Write module specific config files.
E.g. dhcp manager would write ``/etc/dhcpd.conf`` here
"""
def restart_service(self) -> int:
"""
Restart the module specific service.
E.g. the dhcp manager would restart the ``dhcpd`` service here.
"""
return 0
def regen_ethers(self) -> None:
"""
ISC/BIND doesn't use this. It is there for compatibility reasons with other managers.
"""
def sync(self) -> int:
"""
This syncs the manager's server (systemd service) with its new config files.
Basically this restarts the service to apply the changes.
:return: Integer return value of restart_service - 0 on success
"""
self.write_configs()
return self.restart_service()
def sync_single_system(self, system: "System") -> int:
"""
This synchronizes data for a single system. The default implementation is
to trigger full synchronization. Manager modules can overwrite this method
to improve performance.
:param system: A system to be added.
"""
del system # unused var
self.regen_ethers()
return self.sync()
def remove_single_system(self, system_obj: "System") -> None:
"""
This method removes a single system.
"""
del system_obj # unused var
self.regen_ethers()
self.sync()
class DhcpManagerModule(ManagerModule):
"""
Base class for DHCP manager modules.
"""
@abstractmethod
def sync_dhcp(self) -> None:
"""
TODO
"""
class DnsManagerModule(ManagerModule):
"""
Base class for DNS manager modules.
"""
@abstractmethod
def regen_hosts(self) -> None:
"""
TODO
"""
def add_single_hosts_entry(self, system: "System") -> None:
"""
This method adds a single system to the host file.
DNS manager modules can implement this method to improve performance.
Otherwise, this method defaults to a full host regeneration.
:param system: A system to be added.
"""
del system # unused var
self.regen_hosts()
def remove_single_hosts_entry(self, system: "System") -> None:
"""
This method removes a single system from the host file.
DNS manager modules can implement this method to improve performance.
Otherwise, this method defaults to a full host regeneration.
:param system: A system to be removed.
"""
del system # unused var
self.regen_hosts()
class TftpManagerModule(ManagerModule):
"""
Base class for TFTP manager modules.
"""
@abstractmethod
def sync_systems(self, systems: List[str], verbose: bool = True) -> None:
"""
TODO
:param systems: TODO
:param verbose: TODO
"""
@abstractmethod
def write_boot_files(self) -> int:
"""
TODO
"""
return 1
@abstractmethod
def add_single_distro(self, distro: "Distro") -> None:
"""
TODO
:param distro: TODO
"""
# ---- cobbler_cobbler/cobbler/modules/managers/ndjbdns.py | repo: cobbler/cobbler | license: GPL-2.0 ----
# coding=utf-8
"""
This is some of the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2014, Mittwald CM Service GmbH & Co. KG
# SPDX-FileCopyrightText: Martin Helmich <m.helmich@mittwald.de>
# SPDX-FileCopyrightText: Daniel Krämer <d.kraemer@mittwald.de>
import os
import subprocess
from typing import TYPE_CHECKING, Any, Dict
from cobbler.modules.managers import DnsManagerModule
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
MANAGER = None
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "manage"
class _NDjbDnsManager(DnsManagerModule):
"""
Support for Dr. D J Bernstein DNS server.
This DNS server has a lot of forks with IPv6 support. However, the original has no support for IPv6 and thus we
can't add support for it at the moment.
"""
@staticmethod
def what() -> str:
"""
Static method to identify the manager.
:return: Always "ndjbdns".
"""
return "ndjbdns"
def regen_hosts(self) -> None:
self.write_configs()
def write_configs(self) -> None:
"""
This writes the new DNS configuration file to disk.
"""
template_file = "/etc/cobbler/ndjbdns.template"
data_file = "/etc/ndjbdns/data"
data_dir = os.path.dirname(data_file)
a_records: Dict[str, str] = {}
with open(template_file, "r", encoding="UTF-8") as template_fd:
template_content = template_fd.read()
for system in self.systems:
for _, interface in list(system.interfaces.items()):
host = interface.dns_name
ip_address = interface.ip_address
if host:
if host in a_records:
raise Exception(f"Duplicate DNS name: {host}")
a_records[host] = ip_address
template_vars: Dict[str, Any] = {"forward": []}
for host, ip_address in list(a_records.items()):
template_vars["forward"].append((host, ip_address))
self.templar.render(template_content, template_vars, data_file)
with subprocess.Popen(
["/usr/bin/tinydns-data"], cwd=data_dir
) as subprocess_popen_obj:
subprocess_popen_obj.communicate()
if subprocess_popen_obj.returncode != 0:
raise Exception("Could not regenerate tinydns data file.")
def get_manager(api: "CobblerAPI") -> _NDjbDnsManager:
"""
Creates a manager object to manage a tinydns (N-DJBDNS) DNS server.
:param api: The API which holds all information in the current Cobbler instance.
:return: The object to manage the server with.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _NDjbDnsManager(api) # type: ignore
return MANAGER
# ---- cobbler_cobbler/cobbler/modules/managers/genders.py | repo: cobbler/cobbler | license: GPL-2.0 ----
"""
Cobbler Module that manages the cluster configuration tool from CHAOS. For more information please see:
`GitHub - chaos/genders <https://github.com/chaos/genders>`_
"""
import distutils.sysconfig
import logging
import os
import sys
import time
from typing import TYPE_CHECKING, Any, Dict, Union
from cobbler.templar import Templar
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
plib = distutils.sysconfig.get_python_lib()
mod_path = f"{plib}/cobbler"
sys.path.insert(0, mod_path)
TEMPLATE_FILE = "/etc/cobbler/genders.template"
SETTINGS_FILE = "/etc/genders"
logger = logging.getLogger()
def register() -> str:
"""
We should run anytime something inside of Cobbler changes.
:return: Always ``/var/lib/cobbler/triggers/change/*``
"""
return "/var/lib/cobbler/triggers/change/*"
def write_genders_file(
config: "CobblerAPI",
profiles_genders: Dict[str, str],
distros_genders: Dict[str, str],
mgmtcls_genders: Dict[str, str],
):
"""
Genders file is over-written when ``manage_genders`` is set in our settings.
:param config: The API instance to template the data with.
:param profiles_genders: The profiles which should be included.
:param distros_genders: The distros which should be included.
:param mgmtcls_genders: The management classes which should be included.
:raises OSError: Raised in case the template could not be read.
"""
try:
with open(TEMPLATE_FILE, "r", encoding="UTF-8") as template_fd:
template_data = template_fd.read()
except Exception as error:
raise OSError(f"error reading template: {TEMPLATE_FILE}") from error
metadata: Dict[str, Union[str, Dict[str, str]]] = {
"date": time.asctime(time.gmtime()),
"profiles_genders": profiles_genders,
"distros_genders": distros_genders,
"mgmtcls_genders": mgmtcls_genders,
}
templar_inst = Templar(config)
templar_inst.render(template_data, metadata, SETTINGS_FILE)
def run(api: "CobblerAPI", args: Any) -> int:
"""
Mandatory Cobbler trigger hook.
:param api: The api to resolve information with.
:param args: For this implementation unused.
:return: ``0`` or ``1``, depending on the outcome of the operation.
"""
# do not run if we are not enabled.
if not api.settings().manage_genders:
return 0
profiles_genders: Dict[str, str] = {}
distros_genders: Dict[str, str] = {}
mgmtcls_genders: Dict[str, str] = {}
# let's populate our dicts
# TODO: the lists that are created here are strictly comma separated.
# /etc/genders allows for host lists in a notation similar to node00[01-07,08,09,70-71];
# at some point we need to come up with code to generate these types of lists.
# profiles
for prof in api.profiles():
# create the key
profiles_genders[prof.name] = ""
my_systems = api.find_system(profile=prof.name, return_list=True)
if my_systems is None or not isinstance(my_systems, list):
raise ValueError("Search error!")
for system in my_systems:
profiles_genders[prof.name] += system.name + ","
# remove a trailing comma
profiles_genders[prof.name] = profiles_genders[prof.name][:-1]
if profiles_genders[prof.name] == "":
profiles_genders.pop(prof.name, None)
# distros
for dist in api.distros():
# create the key
distros_genders[dist.name] = ""
my_systems = api.find_system(distro=dist.name, return_list=True)
if my_systems is None or not isinstance(my_systems, list):
raise ValueError("Search error!")
for system in my_systems:
distros_genders[dist.name] += system.name + ","
# remove a trailing comma
distros_genders[dist.name] = distros_genders[dist.name][:-1]
if distros_genders[dist.name] == "":
distros_genders.pop(dist.name, None)
# The file doesn't exist and for some reason the template engine won't create it, so spit out an error and tell the
# user what to do.
if not os.path.isfile(SETTINGS_FILE):
logger.error("Error: %s does not exist.", SETTINGS_FILE)
logger.error("Please run: touch %s as root and try again.", SETTINGS_FILE)
return 1
write_genders_file(api, profiles_genders, distros_genders, mgmtcls_genders)
return 0
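The per-profile bookkeeping in `run()` above (join member names with commas, drop empty groups) can be condensed; `genders_map` is a hypothetical helper sketching the same idea:

```python
def genders_map(groups):
    # groups maps a profile/distro name to its member system names; empty
    # groups are dropped, matching the pop() calls in run() above, and
    # ",".join avoids the manual trailing-comma trimming.
    return {name: ",".join(members) for name, members in groups.items() if members}

print(genders_map({"web": ["node01", "node02"], "db": []}))  # -> {'web': 'node01,node02'}
```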
# ---- cobbler_cobbler/cobbler/modules/managers/import_signatures.py | repo: cobbler/cobbler | license: GPL-2.0 ----
"""
Cobbler Module that contains the code for ``cobbler import`` and provides the magic to automatically detect an ISO image
OS and version.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: John Eckersberg <jeckersb@redhat.com>
import glob
import gzip
import os
import os.path
import re
import shutil
import stat
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Union,
)
import magic # type: ignore
from cobbler import enums, utils
from cobbler.cexceptions import CX
from cobbler.modules.managers import ManagerModule
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
try:
import hivex # type: ignore
from hivex.hive_types import REG_EXPAND_SZ, REG_SZ # type: ignore
HAS_HIVEX = True
except ImportError:
REG_SZ = None # type: ignore[reportConstantRedefinition]
REG_EXPAND_SZ = None # type: ignore[reportConstantRedefinition]
HAS_HIVEX = False # type: ignore[reportConstantRedefinition]
# Import aptsources module if available to obtain repo mirror.
try:
from aptsources import distro as debdistro # type: ignore
from aptsources import sourceslist # type: ignore
APT_AVAILABLE = True
except ImportError:
APT_AVAILABLE = False # type: ignore
MANAGER = None
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "manage/import"
def import_walker(
top: str, func: Callable[[Any, str, List[str]], None], arg: Any
) -> None:
"""
Directory tree walk with callback function.
For each directory in the directory tree rooted at top (including top itself, but excluding '.' and '..'), call
``func(arg, dirname, filenames)``. dirname is the name of the directory, and filenames a list of the names of the
files and subdirectories in dirname (excluding '.' and '..'). ``func`` may modify the ``filenames`` list in-place
(e.g. via ``del`` or ``slice`` assignment), and walk will only recurse into the subdirectories whose names remain
in ``filenames``; this can be used to implement a filter, or to impose a specific order of visiting. No semantics
are defined for, or required of, ``arg``, beyond that arg is always passed to ``func``. It can be used, e.g., to
pass a filename pattern, or a mutable object designed to accumulate statistics.
:param top: The most top directory for which func should be run.
:param func: A function which is called as described in the above description.
:param arg: Passing ``None`` for this is common.
"""
try:
names = os.listdir(top)
except os.error:
return
func(arg, top, names)
for name in names:
path_name = os.path.join(top, name)
try:
file_stats = os.lstat(path_name)
except os.error:
continue
if stat.S_ISDIR(file_stats.st_mode):
import_walker(path_name, func, arg)
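The in-place ``filenames`` filter described in the docstring can be exercised with a minimal standalone sketch (hypothetical directory layout; ``walk_with_filter`` mirrors ``import_walker`` rather than importing it):

```python
import os
import tempfile

def walk_with_filter(top, func, arg):
    # Same walk-with-callback shape as import_walker() above: the callback
    # may prune subdirectories by mutating the names list in place.
    try:
        names = os.listdir(top)
    except OSError:
        return
    func(arg, top, names)
    for name in names:
        path_name = os.path.join(top, name)
        if os.path.isdir(path_name) and not os.path.islink(path_name):
            walk_with_filter(path_name, func, arg)

def skip_isolinux(visited, dirname, names):
    visited.append(os.path.basename(dirname))
    if "isolinux" in names:
        names.remove("isolinux")  # pruned: the walker never descends here

visited = []
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "images", "pxeboot"))
    os.mkdir(os.path.join(root, "isolinux"))
    walk_with_filter(root, skip_isolinux, visited)

assert "isolinux" not in visited
assert "pxeboot" in visited
```

Because the callback runs before recursion, removing a name from the list is enough to keep the walker out of that subtree, which is how the importer skips ``isolinux`` directories below.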
class _ImportSignatureManager(ManagerModule):
@staticmethod
def what() -> str:
"""
Identifies what service this manages.
:return: Always will return ``import/signatures``.
"""
return "import/signatures"
def __init__(self, api: "CobblerAPI") -> None:
super().__init__(api)
self.signature: Any = None
self.found_repos: Dict[str, int] = {}
def get_file_lines(self, filename: str) -> Union[List[str], List[bytes]]:
"""
        Get lines from a file, which may or may not be compressed. If compressed, it will be decompressed using
        ``gzip``.
:param filename: The name of the file to be read.
:return: An array with all the lines.
"""
ftype = magic.detect_from_filename(filename) # type: ignore
if ftype.mime_type == "application/gzip": # type: ignore
try:
with gzip.open(filename, "r") as file_fd:
return file_fd.readlines()
except Exception:
pass
if (
ftype.mime_type == "application/x-ms-wim" # type: ignore
or self.file_version < (5, 37)
and filename.lower().endswith(".wim")
):
cmd = "/usr/bin/wiminfo"
if os.path.exists(cmd):
            cmd = f"{cmd} {filename}"
return utils.subprocess_get(cmd).splitlines()
self.logger.info("no %s found, please install wimlib-utils", cmd)
elif ftype.mime_type == "text/plain": # type: ignore
with open(filename, "r", encoding="UTF-8") as file_fd:
return file_fd.readlines()
else:
self.logger.info(
'Could not detect the filetype "%s" and read the content of file "%s". Returning nothing.',
ftype.mime_type, # type: ignore
filename,
)
return []
def get_file_version(self) -> Tuple[int, int]:
"""
        This calls ``file`` and asks for its version number.
        :return: A ``(major, minor)`` version tuple, or ``(0, 0)`` if it could not be determined.
"""
cmd = "/usr/bin/file"
if os.path.exists(cmd):
cmd = f"{cmd} -v"
version_list = utils.subprocess_get(cmd).splitlines()
if len(version_list) < 1:
return (0, 0)
version_list = version_list[0].split("-")
if len(version_list) != 2 or version_list[0] != "file":
return (0, 0)
version_list = version_list[1].split(".")
if len(version_list) < 2:
return (0, 0)
version: List[int] = []
for v in version_list:
version.append(int(v))
return (version[0], version[1])
return (0, 0)
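The version parsing above can be mirrored standalone, assuming the first line of ``file -v`` output looks like ``file-5.45`` (the ``(5, 37)`` threshold matters later because older ``file`` releases cannot detect the WIM mime type):

```python
def parse_file_version(first_line: str) -> tuple:
    # "file-5.45" -> (5, 45); anything unexpected -> (0, 0)
    parts = first_line.split("-")
    if len(parts) != 2 or parts[0] != "file":
        return (0, 0)
    nums = parts[1].split(".")
    if len(nums) < 2:
        return (0, 0)
    try:
        return (int(nums[0]), int(nums[1]))
    except ValueError:
        return (0, 0)

assert parse_file_version("file-5.45") == (5, 45)
assert parse_file_version("magic-1.0") == (0, 0)
```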
def run(
self,
path: str,
name: str,
network_root: Optional[str] = None,
autoinstall_file: Optional[str] = None,
arch: Optional[str] = None,
breed: Optional[str] = None,
os_version: Optional[str] = None,
) -> None:
"""
This is the main entry point in a manager. It is a required function for import modules.
:param path: the directory we are scanning for files
:param name: the base name of the distro
:param network_root: the remote path (nfs/http/ftp) for the distro files
:param autoinstall_file: user-specified response file, which will override the default
:param arch: user-specified architecture
:param breed: user-specified breed
:param os_version: user-specified OS version
        :raises CX: If no signature matched the scanned path.
"""
self.name = name
self.network_root = network_root
self.autoinstall_file = autoinstall_file
self.arch = arch
self.breed = breed
self.os_version = os_version
self.file_version: Tuple[int, int] = self.get_file_version()
self.path = path
self.rootdir = path
self.pkgdir = path
# some fixups for the XMLRPC interface, which does not use "None"
if self.arch == "":
self.arch = None
if self.name == "":
self.name = None
if self.autoinstall_file == "":
self.autoinstall_file = None
if self.os_version == "": # type: ignore
self.os_version = None
if self.network_root == "":
self.network_root = None
if self.os_version and not self.breed: # type: ignore
utils.die(
"OS version can only be specified when a specific breed is selected"
)
self.signature = self.scan_signatures()
if not self.signature:
error_msg = f"No signature matched in {path}"
self.logger.error(error_msg)
raise CX(error_msg)
# now walk the filesystem looking for distributions that match certain patterns
self.logger.info("Adding distros from path %s:", self.path)
if self.breed == "windows": # type: ignore
self.import_winpe()
distros_added: List["Distro"] = []
import_walker(self.path, self.distro_adder, distros_added)
if len(distros_added) == 0:
self.logger.warning("No distros imported, bailing out")
return
# find out if we can auto-create any repository records from the install tree
if self.network_root is None:
self.logger.info("associating repos")
# FIXME: this automagic is not possible (yet) without mirroring
self.repo_finder(distros_added)
def scan_signatures(self) -> Optional[Any]:
"""
Loop through the signatures, looking for a match for both the signature directory and the version file.
"""
sigdata = self.api.get_signatures()
# self.logger.debug("signature cache: %s" % str(sigdata))
for breed in list(sigdata["breeds"].keys()):
if self.breed and self.breed != breed: # type: ignore
continue
for version in list(sigdata["breeds"][breed].keys()):
if self.os_version and self.os_version != version: # type: ignore
continue
for sig in sigdata["breeds"][breed][version].get("signatures", []):
pkgdir = os.path.join(self.path, sig)
if os.path.exists(pkgdir):
self.logger.debug(
"Found a candidate signature: breed=%s, version=%s",
breed,
version,
)
f_re = re.compile(
sigdata["breeds"][breed][version]["version_file"]
)
for root, subdir, fnames in os.walk(self.path):
for fname in fnames + subdir:
if f_re.match(fname):
# if the version file regex exists, we use it to scan the contents of the target
# version file to ensure it's the right version
if sigdata["breeds"][breed][version][
"version_file_regex"
]:
vf_re = re.compile(
sigdata["breeds"][breed][version][
"version_file_regex"
]
)
vf_lines = self.get_file_lines(
os.path.join(root, fname)
)
for line in vf_lines:
if vf_re.match(line):
break
else:
continue
self.logger.debug(
"Found a matching signature: breed=%s, version=%s",
breed,
version,
)
if not self.breed: # type: ignore
self.breed = breed
if not self.os_version: # type: ignore
self.os_version = version
if not self.autoinstall_file:
self.autoinstall_file = sigdata["breeds"][
breed
][version]["default_autoinstall"]
self.pkgdir = pkgdir
return sigdata["breeds"][breed][version]
return None
# required function for import modules
def get_valid_arches(self) -> List[Any]:
"""
Get all valid architectures from the signature file.
:return: An empty list or all valid architectures.
"""
if self.signature:
return sorted(self.signature["supported_arches"], key=lambda s: -1 * len(s))
return []
def get_valid_repo_breeds(self) -> List[Any]:
"""
Get all valid repository architectures from the signatures file.
:return: An empty list or all valid architectures.
"""
if self.signature:
return self.signature["supported_repo_breeds"]
return []
def distro_adder(
self, distros_added: List["Distro"], dirname: str, filenames: List[str]
) -> None:
"""
This is an import_walker routine that finds distributions in the directory to be scanned and then creates them.
        :param distros_added: A list that accumulates the distro objects created so far.
        :param dirname: The directory currently being scanned.
        :param filenames: The names of the entries inside ``dirname``.
"""
re_krn = re.compile(self.signature["kernel_file"])
re_img = re.compile(self.signature["initrd_file"])
# make sure we don't mismatch PAE and non-PAE types
initrd = None
kernel = None
pae_initrd = None
pae_kernel = None
for filename in filenames:
adtls: List["Distro"] = []
# Most of the time we just want to ignore isolinux directories, unless this is one of the oddball distros
# where we do want it.
if dirname.find("isolinux") != -1 and not self.signature["isolinux_ok"]:
continue
fullname = os.path.join(dirname, filename)
if os.path.islink(fullname) and os.path.isdir(fullname):
if fullname.startswith(self.path):
# Prevent infinite loop with Sci Linux 5
# self.logger.warning("avoiding symlink loop")
continue
self.logger.info("following symlink: %s", fullname)
import_walker(fullname, self.distro_adder, distros_added)
if re_img.match(filename):
if filename.find("PAE") == -1:
initrd = os.path.join(dirname, filename)
else:
pae_initrd = os.path.join(dirname, filename)
if re_krn.match(filename):
if filename.find("PAE") == -1:
kernel = os.path.join(dirname, filename)
else:
pae_kernel = os.path.join(dirname, filename)
elif self.breed == "windows" and "wimboot" in self.signature["kernel_file"]: # type: ignore
kernel = os.path.join(self.settings.tftpboot_location, "wimboot")
# if we've collected a matching kernel and initrd pair, turn them in and add them to the list
if initrd is not None and kernel is not None:
adtls.extend(self.add_entry(dirname, kernel, initrd))
kernel = None
initrd = None
elif pae_initrd is not None and pae_kernel is not None:
adtls.extend(self.add_entry(dirname, pae_kernel, pae_initrd))
pae_kernel = None
pae_initrd = None
distros_added.extend(adtls)
def add_entry(self, dirname: str, kernel: str, initrd: str) -> List["Distro"]:
"""
When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and
save them. This includes creating xen and rescue distros/profiles if possible.
        :param dirname: The directory in which the kernel/initrd pair was found.
        :param kernel: The path to the kernel file.
        :param initrd: The path to the initrd file.
        :return: The list of distro objects that were created and added.
"""
# build a proposed name based on the directory structure
proposed_name = self.get_proposed_name(dirname, kernel)
# build a list of arches found in the packages directory
archs = self.learn_arch_from_tree()
if not archs and self.arch:
archs.append(self.arch)
else:
if self.arch and self.arch not in archs:
utils.die(
f"Given arch ({self.arch}) not found on imported tree {self.path}"
)
if len(archs) == 0:
self.logger.error(
"No arch could be detected in %s, and none was specified via the --arch option",
dirname,
)
return []
if len(archs) > 1:
self.logger.warning("- Warning : Multiple archs found : %s", archs)
distros_added: List["Distro"] = []
for pxe_arch in archs:
name = proposed_name + "-" + pxe_arch
existing_distro = self.distros.find(name=name)
if existing_distro is not None:
self.logger.warning(
"skipping import, as distro name already exists: %s", name
)
continue
if name.find("-autoboot") != -1:
# this is an artifact of some EL-3 imports
continue
new_distro = self.api.new_distro(
name=name,
kernel=kernel,
initrd=initrd,
arch=pxe_arch,
breed=self.breed, # type: ignore
os_version=self.os_version, # type: ignore
kernel_options=self.signature.get("kernel_options", {}),
kernel_options_post=self.signature.get("kernel_options_post", {}),
template_files=self.signature.get("template_files", {}),
)
# This logic is temporary until https://github.com/cobbler/libcobblersignatures/issues/77 is implemented
boot_files: Dict[str, str] = {}
for boot_file in self.signature["boot_files"]:
boot_files[f"$local_img_path/{boot_file}"] = f"{self.path}/{boot_file}"
# template_files must be a dict because during creation of the distro we explicitly set it as such.
# Also the property template_files is NOT inheritable.
new_distro.template_files.update(boot_files)
self.configure_tree_location(new_distro)
self.api.add_distro(new_distro, save=True)
distros_added.append(new_distro)
# see if the profile name is already used, if so, skip it and do not modify the existing profile
existing_profile = self.profiles.find(name=name)
if existing_profile is not None:
self.logger.info(
"skipping existing profile, name already exists: %s", name
)
continue
new_profile = self.api.new_profile(
name=name,
distro=name,
autoinstall=self.autoinstall_file, # type: ignore
)
# depending on the name of the profile we can define a good virt-type for usage with koan
if name.find("-xen") != -1:
new_profile.virt_type = enums.VirtType.XENPV
elif name.find("vmware") != -1:
new_profile.virt_type = enums.VirtType.VMWARE
else:
new_profile.virt_type = enums.VirtType.KVM
if self.breed == "windows": # type: ignore
dest_path = os.path.join(self.path, "boot")
kernel_path = f"http://@@http_server@@/images/{name}/wimboot"
if new_distro.os_version in ("xp", "2003"):
kernel_path = "pxeboot.0"
bootmgr = "bootmgr.exe"
if "wimboot" in kernel:
bootmgr = "bootmgr.efi"
bootmgr_path = os.path.join(dest_path, bootmgr)
bcd_path = os.path.join(dest_path, "bcd")
winpe_path = os.path.join(dest_path, "winpe.wim")
if (
os.path.exists(bootmgr_path)
and os.path.exists(bcd_path)
and os.path.exists(winpe_path)
):
new_profile.autoinstall_meta = {
"kernel": kernel_path,
"bootmgr": bootmgr,
"bcd": "bcd",
"winpe": "winpe.wim",
"answerfile": "autounattended.xml",
"post_install_script": "post_install.cmd",
}
self.api.add_profile(new_profile, save=True)
return distros_added
def learn_arch_from_tree(self) -> List[Any]:
"""
If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should
add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This
is important for producing predictable distro names (and profile names) from differing import sources.
:return: The guessed architecture from a distribution dvd.
"""
result: Dict[str, int] = {}
# FIXME : this is called only once, should not be a walk
import_walker(self.path, self.arch_walker, result)
if result.pop("amd64", False):
result["x86_64"] = 1
if result.pop("i686", False):
result["i386"] = 1
if result.pop("i586", False):
result["i386"] = 1
if result.pop("x86", False):
result["i386"] = 1
if result.pop("arm64", False):
result["aarch64"] = 1
return list(result.keys())
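The alias collapsing above can be sketched as a standalone mirror (not the module's own API), showing how equivalent arch names found in the tree are folded into Cobbler's canonical ones:

```python
def normalize_arches(found: dict) -> list:
    # Collapse alias arch names into canonical ones, mirroring
    # learn_arch_from_tree() above.
    aliases = {"amd64": "x86_64", "i686": "i386", "i586": "i386",
               "x86": "i386", "arm64": "aarch64"}
    for alias, canonical in aliases.items():
        if found.pop(alias, False):
            found[canonical] = 1
    return list(found.keys())

assert set(normalize_arches({"amd64": 1, "i686": 1})) == {"x86_64", "i386"}
assert normalize_arches({"ppc64le": 1}) == ["ppc64le"]
```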
def arch_walker(self, foo: Dict[Any, Any], dirname: str, fnames: List[Any]) -> None:
"""
Function for recursively searching through a directory for a kernel file matching a given architecture, called
by ``learn_arch_from_tree()``
:param foo: Into this dict there will be put additional meta information.
:param dirname: The directory name where the kernel can be found.
:param fnames: This should be a list like object which will be looped over.
"""
re_krn = re.compile(self.signature["kernel_arch"])
        # try to find a kernel header RPM and then look at its arch.
for fname in fnames:
if re_krn.match(fname):
if self.signature["kernel_arch_regex"]:
re_krn2 = re.compile(self.signature["kernel_arch_regex"])
krn_lines = self.get_file_lines(os.path.join(dirname, fname))
for line in krn_lines:
match_obj = re_krn2.match(line)
if match_obj:
for group in match_obj.groups():
group = group.lower()
if group in self.get_valid_arches():
foo[group] = 1
else:
for arch in self.get_valid_arches():
if fname.find(arch) != -1:
foo[arch] = 1
break
for arch in ["i686", "amd64"]:
if fname.find(arch) != -1:
foo[arch] = 1
break
def get_proposed_name(self, dirname: str, kernel: Optional[str] = None) -> str:
"""
Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object
based on the contents of that path.
:param dirname: The directory where the distribution is living in.
:param kernel: The kernel of that distro.
:return: The name which is recommended.
"""
if self.name is None:
raise ValueError("Name cannot be None!")
if self.network_root is not None:
name = self.name
else:
# remove the part that says /var/www/cobbler/distro_mirror/name
name = "-".join(dirname.split("/")[5:])
if kernel is not None:
if kernel.find("PAE") != -1 and name.find("PAE") == -1:
name += "-PAE"
if kernel.find("xen") != -1 and name.find("xen") == -1:
name += "-xen"
# Clear out some cruft from the proposed name
name = name.replace("--", "-")
for name_suffix in (
"-netboot",
"-ubuntu-installer",
"-amd64",
"-i386",
"-images",
"-pxeboot",
"-install",
"-isolinux",
"-boot",
"-suseboot",
"-loader",
"-os",
"-tree",
"var-www-cobbler-",
"distro_mirror-",
):
name = name.replace(name_suffix, "")
# remove any architecture name related string, as real arch will be appended later
name = name.replace("chrp", "ppc64")
for separator in ["-", "_", "."]:
for arch in [
"i386",
"x86_64",
"ia64",
"ppc64le",
"ppc64el",
"ppc64",
"ppc32",
"ppc",
"x86",
"s390x",
"s390",
"386",
"amd",
]:
name = name.replace(f"{separator}{arch}", "")
return name
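The cleanup rules can be exercised standalone; the input below is a hypothetical directory-derived name, not taken from a real import (``clean_proposed_name`` mirrors the suffix/arch stripping above):

```python
def clean_proposed_name(name: str) -> str:
    # Mirror of get_proposed_name()'s cleanup: drop cruft suffixes,
    # then strip arch substrings (the real arch is appended later).
    name = name.replace("--", "-")
    for suffix in ("-netboot", "-ubuntu-installer", "-amd64", "-i386",
                   "-images", "-pxeboot", "-install", "-isolinux", "-boot",
                   "-suseboot", "-loader", "-os", "-tree",
                   "var-www-cobbler-", "distro_mirror-"):
        name = name.replace(suffix, "")
    name = name.replace("chrp", "ppc64")
    for sep in ("-", "_", "."):
        for arch in ("i386", "x86_64", "ia64", "ppc64le", "ppc64el",
                     "ppc64", "ppc32", "ppc", "x86", "s390x", "s390",
                     "386", "amd"):
            name = name.replace(f"{sep}{arch}", "")
    return name

assert clean_proposed_name("Fedora-39-x86_64-images-pxeboot") == "Fedora-39"
assert clean_proposed_name("ubuntu-22.04-netboot-amd64") == "ubuntu-22.04"
```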
def configure_tree_location(self, distribution: "Distro") -> None:
"""
Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use
for automating the Linux distribution installation, and create a autoinstall_meta variable $tree that contains
this.
:param distribution: The distribution object for that the tree should be configured.
"""
base = self.rootdir
# how we set the tree depends on whether an explicit network_root was specified
if self.network_root is None:
dest_link = os.path.join(self.settings.webdir, "links", distribution.name)
# create the links directory only if we are mirroring because with SELinux Apache can't symlink to NFS
# (without some doing)
if not os.path.exists(dest_link):
try:
self.logger.info(
"trying symlink: %s -> %s", str(base), str(dest_link)
)
os.symlink(base, dest_link)
except Exception:
# FIXME: This shouldn't happen but I've seen it ... debug ...
self.logger.warning(
"symlink creation failed: %s, %s", base, dest_link
)
protocol = self.api.settings().autoinstall_scheme
tree = f"{protocol}://@@http_server@@/cblr/links/{distribution.name}"
self.set_install_tree(distribution, tree)
else:
# Where we assign the automated installation file source is relative to our current directory and the input
# start directory in the crawl. We find the path segments between and tack them on the network source
# path to find the explicit network path to the distro that Anaconda can digest.
tail = filesystem_helpers.path_tail(self.path, base)
tree = self.network_root[:-1] + tail
self.set_install_tree(distribution, tree)
def set_install_tree(self, distribution: "Distro", url: str) -> None:
"""
Simple helper function to set the tree automated installation metavariable.
:param distribution: The distribution object for which the install tree should be set.
:param url: The url for the tree.
"""
self.logger.debug('Setting "tree" for distro "%s"', distribution.name)
distribution.autoinstall_meta = {"tree": url}
# ==========================================================================
# Repo Functions
def repo_finder(self, distros_added: List["Distro"]) -> None:
"""
This routine looks through all distributions and tries to find any applicable repositories in those
distributions for post-install usage.
        :param distros_added: An iterable collection of distributions.
"""
for repo_breed in self.get_valid_repo_breeds():
self.logger.info("checking for %s repo(s)", repo_breed)
repo_adder: Optional[Callable[["Distro"], None]] = None
if repo_breed == "yum":
repo_adder = self.yum_repo_adder
elif repo_breed == "rhn":
repo_adder = self.rhn_repo_adder
elif repo_breed == "rsync":
repo_adder = self.rsync_repo_adder
elif repo_breed == "apt":
repo_adder = self.apt_repo_adder
else:
self.logger.warning(
"skipping unknown/unsupported repo breed: %s", repo_breed
)
continue
for current_distro_added in distros_added:
if current_distro_added.kernel.find("distro_mirror") != -1:
repo_adder(current_distro_added)
self.api.add_distro(
current_distro_added, save=True, with_triggers=False
)
else:
self.logger.info(
"skipping distro %s since it isn't mirrored locally",
current_distro_added.name,
)
# ==========================================================================
# yum-specific
def yum_repo_adder(self, distro: "Distro") -> None:
"""
For yum, we recursively scan the rootdir for repos to add
:param distro: The distribution object to scan and possibly add.
"""
self.logger.info("starting descent into %s for %s", self.rootdir, distro.name)
import_walker(self.rootdir, self.yum_repo_scanner, distro)
def yum_repo_scanner(
self, distro: "Distro", dirname: str, fnames: Iterable[str]
) -> None:
"""
This is an import_walker routine that looks for potential yum repositories to be added to the configuration for
post-install usage.
:param distro: The distribution object to check for.
:param dirname: The folder with repositories to check.
        :param fnames: The names of the entries inside ``dirname``.
"""
matches = {}
for fname in fnames:
if fname in ("base", "repodata"):
self.logger.info("processing repo at : %s", dirname)
# only run the repo scanner on directories that contain a comps.xml
gloob1 = glob.glob(f"{dirname}/{fname}/*comps*.xml")
if len(gloob1) >= 1:
if dirname in matches:
self.logger.info(
"looks like we've already scanned here: %s", dirname
)
continue
self.logger.info("need to process repo/comps: %s", dirname)
self.yum_process_comps_file(dirname, distro)
matches[dirname] = 1
else:
self.logger.info(
"directory %s is missing xml comps file, skipping", dirname
)
continue
def yum_process_comps_file(self, comps_path: str, distribution: "Distro") -> None:
"""
When importing Fedora/EL certain parts of the install tree can also be used as yum repos containing packages
that might not yet be available via updates in yum. This code identifies those areas. Existing repodata will be
        used as-is, but repodata is created for older, non-yum-based installers.
        :param comps_path: The directory that contains the ``repodata`` (or legacy ``base``) subdirectory.
        :param distribution: The distribution whose source repos should be extended.
"""
if os.path.exists(os.path.join(comps_path, "repodata")):
keeprepodata = True
masterdir = "repodata"
else:
# older distros...
masterdir = "base"
keeprepodata = False
# figure out what our comps file is ...
self.logger.info("looking for %s/%s/*comps*.xml", comps_path, masterdir)
files = glob.glob(f"{comps_path}/{masterdir}/*comps*.xml")
if len(files) == 0:
self.logger.info(
"no comps found here: %s", os.path.join(comps_path, masterdir)
)
return # no comps xml file found
# pull the filename from the longer part
comps_file = files[0].split("/")[-1]
try:
# Store the yum configs on the filesystem so we can use them later. And configure them in the automated
# installation file post section, etc.
counter = len(distribution.source_repos)
# find path segment for yum_url (changing filesystem path to http:// trailing fragment)
seg = comps_path.rfind("distro_mirror")
urlseg = comps_path[(seg + len("distro_mirror") + 1) :]
fname = os.path.join(
self.settings.webdir,
"distro_mirror",
"config",
f"{distribution.name}-{counter}.repo",
)
protocol = self.api.settings().autoinstall_scheme
repo_url = f"{protocol}://@@http_server@@/cobbler/distro_mirror/config/{distribution.name}-{counter}.repo"
repo_url2 = f"{protocol}://@@http_server@@/cobbler/distro_mirror/{urlseg}"
distribution.source_repos.extend([repo_url, repo_url2])
config_dir = os.path.dirname(fname)
if not os.path.exists(config_dir):
os.makedirs(config_dir)
# NOTE: the following file is now a Cheetah template, so it can be remapped during sync, that's why we have
# the @@http_server@@ left as templating magic.
# repo_url2 is actually no longer used. (?)
with open(fname, "w", encoding="UTF-8") as config_file:
config_file.write(f"[core-{counter}]\n")
config_file.write(f"name=core-{counter}\n")
config_file.write(
f"baseurl={protocol}://@@http_server@@/cobbler/distro_mirror/{urlseg}\n"
)
config_file.write("enabled=1\n")
config_file.write("gpgcheck=0\n")
config_file.write("priority=$yum_distro_priority\n")
# Don't run creatrepo twice -- this can happen easily for Xen and PXE, when they'll share same repo files.
if keeprepodata:
self.logger.info("Keeping repodata as-is :%s/repodata", comps_path)
self.found_repos[comps_path] = 1
elif comps_path not in self.found_repos:
utils.remove_yum_olddata(comps_path)
cmd = [
"createrepo",
self.settings.createrepo_flags,
"--groupfile",
os.path.join(comps_path, masterdir, comps_file),
comps_path,
]
utils.subprocess_call(cmd, shell=False)
self.found_repos[comps_path] = 1
# For older distros, if we have a "base" dir parallel with "repodata", we need to copy comps.xml up
# one...
path_1 = os.path.join(comps_path, "repodata", "comps.xml")
path_2 = os.path.join(comps_path, "base", "comps.xml")
if os.path.exists(path_1) and os.path.exists(path_2):
shutil.copyfile(path_1, path_2)
except Exception:
self.logger.error("error launching createrepo (not installed?), ignoring")
utils.log_exc()
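The filesystem-path to URL-fragment mapping in the try block above can be sketched with a hypothetical mirror path; ``distro_mirror`` anchors the web-visible part of the tree:

```python
# Hypothetical mirrored tree path (not from a real import).
comps_path = "/var/www/cobbler/distro_mirror/fedora-39/Everything"

# Everything after "distro_mirror/" becomes the URL fragment.
seg = comps_path.rfind("distro_mirror")
urlseg = comps_path[seg + len("distro_mirror") + 1:]

# urlseg is what gets templated into the repo's baseurl; @@http_server@@
# is left as Cheetah templating magic, as in the generated .repo file.
baseurl = f"http://@@http_server@@/cobbler/distro_mirror/{urlseg}"

assert urlseg == "fedora-39/Everything"
```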
# ==========================================================================
# apt-specific
def apt_repo_adder(self, distribution: "Distro") -> None:
"""
Automatically import apt repositories when importing signatures.
:param distribution: The distribution to scan for apt repositories.
"""
self.logger.info("adding apt repo for %s", distribution.name)
# Obtain repo mirror from APT if available
mirror = ""
if APT_AVAILABLE:
# Example returned URL: http://us.archive.ubuntu.com/ubuntu
mirror = self.get_repo_mirror_from_apt()
if not mirror:
mirror = "http://archive.ubuntu.com/ubuntu"
repo = self.api.new_repo()
repo.breed = enums.RepoBreeds.APT
repo.arch = enums.RepoArchs.to_enum(distribution.arch.value)
repo.keep_updated = True
repo.apt_components = "main universe" # TODO: make a setting?
repo.apt_dists = (
            f"{distribution.os_version} {distribution.os_version}-updates "
            f"{distribution.os_version}-security"
)
repo.name = distribution.name
repo.os_version = distribution.os_version
if distribution.breed == "ubuntu":
repo.mirror = mirror
else:
# NOTE : The location of the mirror should come from timezone
repo.mirror = (
f"http://ftp.{'us'}.debian.org/debian/dists/{distribution.os_version}"
)
self.logger.info("Added repos for %s", distribution.name)
self.api.add_repo(repo)
# FIXME: Add the found/generated repos to the profiles that were created during the import process
def get_repo_mirror_from_apt(self) -> Any:
"""
This tries to determine the apt mirror/archive to use (when processing repos) if the host machine is Debian or
Ubuntu.
        :return: The mirror URL, or ``False`` if detection fails.
"""
try:
sources = sourceslist.SourcesList() # type: ignore
release = debdistro.get_distro() # type: ignore
release.get_sources(sources) # type: ignore
mirrors = release.get_server_list() # type: ignore
for mirror in mirrors: # type: ignore
if mirror[2]:
return mirror[1] # type: ignore
except Exception: # type: ignore
return False
# ==========================================================================
# rhn-specific
@staticmethod
def rhn_repo_adder(distribution: "Distro") -> None:
"""
Not currently used.
:param distribution: Not used currently.
"""
return
# ==========================================================================
# rsync-specific
@staticmethod
def rsync_repo_adder(distribution: "Distro") -> None:
"""
Not currently used.
:param distribution: Not used currently.
"""
return
# ==========================================================================
# windows-specific
def import_winpe(self) -> None:
"""
Preparing a Windows distro for network boot.
"""
winpe_path = self.extract_winpe(self.path)
# For Windows, file paths are case-insensitive within a WinPE image, but for wimlib-utils
# they are not. And wimlib-utils does not provide options for case-insensitive search
# and extraction of files from this image other than setting the WIMLIB_IMAGEX_IGNORE_CASE
# environment variable.
wimdir_result = utils.subprocess_get(
["/usr/bin/wimdir", winpe_path, "1"], shell=False
)
wim_file_list = wimdir_result.split("\n")
file_dict = {
"/windows/boot/pxe/pxeboot.n12": "",
"/windows/boot/pxe/bootmgr.exe": "",
"/windows/boot/efi/bootmgr.efi": "",
"/windows/system32/config/software": "",
}
for wim_file in wim_file_list:
if wim_file.lower() in file_dict:
file_dict[wim_file.lower()] = wim_file
file_list = [
file_dict["/windows/boot/efi/bootmgr.efi"],
file_dict["/windows/system32/config/software"],
]
if "wimboot" not in self.signature["kernel_file"]:
file_list.extend(
[
file_dict["/windows/boot/pxe/pxeboot.n12"],
file_dict["/windows/boot/pxe/bootmgr.exe"],
]
)
self.extract_files_from_wim(winpe_path, file_list)
self.update_winpe(winpe_path, file_dict["/windows/system32/config/software"])
def extract_winpe(self, distro_path: str) -> str:
"""
        Extracting winpe.wim from boot.wim.
:param distro_path: The directory where the Windows distro is located.
:return: The path to the extracted WinPE image.
        :raises CX: If ``boot.wim`` is missing or the extraction fails.
"""
bootwim_path = os.path.join(distro_path, "sources", "boot.wim")
dest_path = os.path.join(distro_path, "boot")
if not os.path.exists(bootwim_path):
error_msg = f"{bootwim_path} not found!"
self.logger.error(error_msg)
raise CX(error_msg)
winpe_path = os.path.join(dest_path, "winpe.wim")
if not os.path.exists(dest_path):
filesystem_helpers.mkdir(dest_path)
if os.path.exists(winpe_path):
filesystem_helpers.rmfile(winpe_path)
if (
utils.subprocess_call(
["/usr/bin/wimexport", bootwim_path, "1", winpe_path, "--boot"],
shell=False,
)
!= 0
):
error_msg = f"Cannot extract {winpe_path} from {bootwim_path}!"
self.logger.error(error_msg)
raise CX(error_msg)
return winpe_path
def update_winpe(self, winpe_path: str, win_registry: str) -> None:
"""
Update WinPE image for network boot.
:param winpe_path: The path to WinPE image.
:param win_registry: The path to the Windows registry in the WinPE image.
        :raises CX: If python3-hivex is not installed.
"""
if not HAS_HIVEX:
error_msg = (
"python3-hivex not found. If you need Automatic Windows "
"Installation support, please install."
)
self.logger.error(error_msg)
raise CX(error_msg)
software = os.path.join(
os.path.dirname(winpe_path), os.path.basename(win_registry).lower()
)
hivex_obj = hivex.Hivex(software, write=True) # type: ignore
nodes: List[Any] = [hivex_obj.root()] # type: ignore
while len(nodes) > 0:
node = nodes.pop()
nodes.extend(hivex_obj.node_children(node)) # type: ignore
self.reg_node_update(hivex_obj, node)
hivex_obj.commit(software) # type: ignore
utils.subprocess_call(
[
"/usr/bin/wimupdate",
winpe_path,
f"--command=add {software} {win_registry}",
],
shell=False,
)
os.remove(software)
def reg_node_update(self, hivex_obj: Any, node: Any) -> None:
"""
Replacement of the substring X:\\$windows.~bt with X: in the Windows registry node.
:param hivex_obj: Hivex object.
:param node: The registry node.
"""
new_values: List[Optional[Dict[str, Any]]] = []
update_flag = False
key_vals: List[Any] = hivex_obj.node_values(node) # type: ignore
pat = "X:\\$windows.~bt"
for key_val in key_vals:
key: str = hivex_obj.value_key(key_val) # type: ignore
val = hivex_obj.node_get_value(node, key) # type: ignore
val_type: int
val_value: bytes
val_type, val_value = hivex_obj.value_value(val) # type: ignore
if pat in key:
key = key.replace(pat, "X:")
update_flag = True
if val_type in (REG_SZ, REG_EXPAND_SZ):
val_string: str = hivex_obj.value_string(val) # type: ignore
if pat in val_string:
val_string = val_string.replace(pat, "X:")
val_value = (val_string + "\0").encode(encoding="utf-16le")
update_flag = True
new_values.append(
{
"key": key,
"t": val_type,
"value": val_value,
}
)
if update_flag:
hivex_obj.node_set_values(node, new_values) # type: ignore
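The string-value rewrite can be sketched standalone; the hive stores ``REG_SZ``/``REG_EXPAND_SZ`` values as NUL-terminated UTF-16LE, which is why the replacement re-encodes with a trailing ``"\0"``:

```python
PAT = "X:\\$windows.~bt"

def rewrite_reg_string(val_string: str) -> bytes:
    # Replace the WinPE staging prefix and re-encode the value the way
    # the hive expects it: NUL-terminated UTF-16LE.
    val_string = val_string.replace(PAT, "X:")
    return (val_string + "\0").encode("utf-16le")

raw = rewrite_reg_string("X:\\$windows.~bt\\Windows\\system32")
assert raw.decode("utf-16le") == "X:\\Windows\\system32\0"
```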
def extract_files_from_wim(self, winpe_path: str, wim_files: List[str]) -> None:
"""
        Extracting files from winpe.wim.
:param winpe_path: The path to the WinPE image.
:param wim_files: The list of files to extract.
        :raises CX: If ``wimextract`` fails.
"""
dest_path = os.path.dirname(winpe_path)
cmd_args = [
"/usr/bin/wimextract",
winpe_path,
"1",
]
cmd_args.extend(wim_files)
cmd_args.extend(
[
f"--dest-dir={dest_path}",
"--no-acls",
"--no-attributes",
]
)
if (
utils.subprocess_call(
cmd_args,
shell=False,
)
!= 0
):
error_msg = f'Cannot extract "{wim_files}" files from {winpe_path}!'
self.logger.error(error_msg)
raise CX(error_msg)
for wim_file_path in wim_files:
wim_file = os.path.basename(wim_file_path)
if wim_file != wim_file.lower():
os.rename(
os.path.join(dest_path, wim_file),
os.path.join(dest_path, wim_file.lower()),
)
# ==========================================================================
def get_import_manager(api: "CobblerAPI") -> _ImportSignatureManager:
"""
Get an instance of the import manager which enables you to import various things.
:param api: The API instance of Cobbler
:return: The object to import data with.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _ImportSignatureManager(api) # type: ignore
return MANAGER
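`get_import_manager()` above uses a lazily created module-level singleton. A minimal sketch of that pattern, with hypothetical `_Manager` and `get_manager` names standing in for the real classes:

```python
from typing import Optional


class _Manager:
    """Stand-in for a manager class; hypothetical, for illustration only."""

    def __init__(self, api: object) -> None:
        self.api = api


_MANAGER: Optional[_Manager] = None


def get_manager(api: object) -> _Manager:
    """Lazily create the shared instance on first call, then reuse it."""
    global _MANAGER  # pylint: disable=global-statement
    if _MANAGER is None:
        _MANAGER = _Manager(api)
    return _MANAGER
```

Note that later calls with a different `api` argument still return the first instance, which mirrors the behavior of the `MANAGER` globals in these modules.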
| 46,980 | Python | .py | 1,014 | 33.19428 | 120 | 0.546147 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,157 | in_tftpd.py | cobbler_cobbler/cobbler/modules/managers/in_tftpd.py |
"""
This is some of the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
import glob
import os.path
import shutil
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from cobbler import templar, tftpgen, utils
from cobbler.cexceptions import CX
from cobbler.modules.managers import TftpManagerModule
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.system import System
MANAGER = None
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "manage"
class _InTftpdManager(TftpManagerModule):
@staticmethod
def what() -> str:
"""
Static method to identify the manager.
:return: Always "in_tftpd".
"""
return "in_tftpd"
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.tftpgen = tftpgen.TFTPGen(api)
self.bootloc = api.settings().tftpboot_location
self.webdir = api.settings().webdir
def write_boot_files_distro(self, distro: "Distro") -> int:
"""
Copy the files listed in the distro's ``template_files`` into the TFTP server folder.
:param distro: The distro whose template files should be copied.
:return: ``0`` on success.
"""
# Collapse the object down to a rendered datastructure.
# The second argument set to false means we don't collapse dicts/arrays into a flat string.
target = utils.blender(self.api, False, distro)
# Create metadata for the templar function.
# Right now, just using local_img_path, but adding more Cobbler variables here would probably be good.
metadata: Dict[str, Any] = {}
metadata["local_img_path"] = os.path.join(self.bootloc, "images", distro.name)
metadata["web_img_path"] = os.path.join(
self.webdir, "distro_mirror", distro.name
)
# Create the templar instance. Used to template the target directory
templater = templar.Templar(self.api)
# Loop through the dict of boot files, executing a cp for each one
self.logger.info("processing template_files for distro: %s", distro.name)
for boot_file in target["template_files"].keys():
rendered_target_file = templater.render(boot_file, metadata, None)
rendered_source_file = templater.render(
target["template_files"][boot_file], metadata, None
)
file = "" # to prevent unboundlocalerror
filedst = "" # to prevent unboundlocalerror
try:
for file in glob.glob(rendered_source_file):
if file == rendered_source_file:
# this wasn't really a glob, so just copy it as is
filedst = rendered_target_file
else:
# this was a glob, so figure out what the destination file path/name should be
_, tgt_file = os.path.split(file)
rnd_path, _ = os.path.split(rendered_target_file)
filedst = os.path.join(rnd_path, tgt_file)
if not os.path.isdir(rnd_path):
filesystem_helpers.mkdir(rnd_path)
if not os.path.isfile(filedst):
shutil.copyfile(file, filedst)
self.logger.info(
"copied file %s to %s for %s", file, filedst, distro.name
)
except Exception:
self.logger.error(
"failed to copy file %s to %s for %s", file, filedst, distro.name
)
return 0
def write_boot_files(self) -> int:
"""
Copy files in ``profile["template_files"]`` into the TFTP server folder. Used for vmware currently.
:return: ``0`` on success.
"""
for distro in self.distros:
self.write_boot_files_distro(distro)
return 0
def sync_single_system(
self,
system: "System",
menu_items: Optional[Dict[str, Union[str, Dict[str, str]]]] = None,
) -> int:
"""
Write out new ``pxelinux.cfg`` files to the TFTP server folder (or grub/system/<mac> in grub case)
:param system: The system to be added.
:param menu_items: The menu items to add
"""
if not menu_items:
menu_items = self.tftpgen.get_menu_items()
self.tftpgen.write_all_system_files(system, menu_items)
# generate any templates listed in the distro
self.tftpgen.write_templates(system)
return 0
def add_single_distro(self, distro: "Distro") -> None:
"""
Copy the files of a single distro to the TFTP server folder and write its boot files.
:param distro: The distro to be added.
"""
self.tftpgen.copy_single_distro_files(distro, self.bootloc, False)
self.write_boot_files_distro(distro)
def sync_systems(self, systems: List[str], verbose: bool = True) -> None:
"""
Write out specified systems as separate files to the TFTP server folder.
:param systems: List of systems to write PXE configuration files for.
:param verbose: Whether the TFTP server should log this verbose or not.
"""
if not (
isinstance(systems, list) # type: ignore
and all(isinstance(sys_name, str) for sys_name in systems) # type: ignore
):
raise TypeError("systems needs to be a list of strings")
if not isinstance(verbose, bool): # type: ignore
raise TypeError("verbose needs to be of type bool")
system_objs: List["System"] = []
for system_name in systems:
# get the system object:
system_obj = self.api.find_system(name=system_name)
if system_obj is None:
self.logger.info("did not find any system named %s", system_name)
continue
if isinstance(system_obj, list):
raise ValueError("Ambiguous match detected!")
system_objs.append(system_obj)
menu_items = self.tftpgen.get_menu_items()
for system in system_objs:
self.sync_single_system(system, menu_items)
self.logger.info("generating PXE menu structure")
self.tftpgen.make_pxe_menu()
def sync(self) -> int:
"""
Write out all files to the tftpboot directory.
"""
self.logger.info("copying bootloaders")
self.tftpgen.copy_bootloaders(self.bootloc)
self.logger.info("copying distros to tftpboot")
# Adding in the exception handling to not blow up if files have been moved (or the path references an NFS
# directory that's no longer mounted)
for distro in self.distros:
try:
self.logger.info("copying files for distro: %s", distro.name)
self.tftpgen.copy_single_distro_files(distro, self.bootloc, False)
except CX as cobbler_exception:
self.logger.error(cobbler_exception.value)
self.logger.info("copying images")
self.tftpgen.copy_images()
# the actual pxelinux.cfg files, for each interface
self.logger.info("generating PXE configuration files")
menu_items = self.tftpgen.get_menu_items()
for system in self.systems:
self.tftpgen.write_all_system_files(system, menu_items)
self.logger.info("generating PXE menu structure")
self.tftpgen.make_pxe_menu()
return 0
def get_manager(api: "CobblerAPI") -> _InTftpdManager:
"""
Creates a manager object to manage an in_tftp server.
:param api: The API which holds all information in the current Cobbler instance.
:return: The object to manage the server with.
"""
# Singleton used, therefore ignoring 'global'
global MANAGER # pylint: disable=global-statement
if not MANAGER:
MANAGER = _InTftpdManager(api) # type: ignore
return MANAGER
| 7,978 | Python | .py | 179 | 34.391061 | 113 | 0.615008 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,158 | ldap.py | cobbler_cobbler/cobbler/modules/authentication/ldap.py |
"""
Authentication module that uses ldap
Settings in /etc/cobbler/authn_ldap.conf
Choice of authentication module is in /etc/cobbler/modules.conf
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# We need to ignore this due to the ldap bindings not being type annotated and also a C library at the same time.
# pylint: disable=no-member
import traceback
from typing import TYPE_CHECKING
from cobbler import enums
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always "authn"
"""
return "authn"
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
"""
Validate an LDAP bind, returning whether the authentication was successful or not.
:param api_handle: The api instance to resolve settings.
:param username: The username to authenticate.
:param password: The password to authenticate.
:return: True if the ldap server authentication was a success, otherwise false.
:raises CX: Raised in case the LDAP search bind credentials are missing in the settings.
"""
if not password:
return False
import ldap # type: ignore
server = api_handle.settings().ldap_server
basedn = api_handle.settings().ldap_base_dn
port = str(api_handle.settings().ldap_port)
prefix = api_handle.settings().ldap_search_prefix
# Support for LDAP client certificates
tls = api_handle.settings().ldap_tls
tls_cacertdir = api_handle.settings().ldap_tls_cacertdir
tls_cacertfile = api_handle.settings().ldap_tls_cacertfile
tls_keyfile = api_handle.settings().ldap_tls_keyfile
tls_certfile = api_handle.settings().ldap_tls_certfile
tls_cipher_suite = api_handle.settings().ldap_tls_cipher_suite
tls_reqcert = api_handle.settings().ldap_tls_reqcert
# allow multiple servers split by a space
if " " in server:
servers = server.split()
else:
servers = [server]
# to get ldap working with Active Directory
ldap.set_option(ldap.OPT_REFERRALS, 0) # type: ignore
uri = ""
for server in servers:
# form our ldap uri based on connection port
if port == "389":
uri += "ldap://" + server
elif port == "636":
uri += "ldaps://" + server
elif port == "3269":
uri += "ldaps://" + f"{server}:{port}"
else:
uri += "ldap://" + f"{server}:{port}"
uri += " "
uri = uri.strip()
# connect to LDAP host
directory = ldap.initialize(uri) # type: ignore
if port in ("636", "3269"):
ldaps_tls = ldap
else:
ldaps_tls = directory
if tls or port in ("636", "3269"):
if tls_cacertdir:
ldaps_tls.set_option(ldap.OPT_X_TLS_CACERTDIR, tls_cacertdir) # type: ignore
if tls_cacertfile:
ldaps_tls.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile) # type: ignore
if tls_keyfile:
ldaps_tls.set_option(ldap.OPT_X_TLS_KEYFILE, tls_keyfile) # type: ignore
if tls_certfile:
ldaps_tls.set_option(ldap.OPT_X_TLS_CERTFILE, tls_certfile) # type: ignore
if tls_reqcert:
req_cert: enums.TlsRequireCert = enums.TlsRequireCert.to_enum(tls_reqcert)
reqcert_types = { # type: ignore
enums.TlsRequireCert.NEVER: ldap.OPT_X_TLS_NEVER, # type: ignore
enums.TlsRequireCert.ALLOW: ldap.OPT_X_TLS_ALLOW, # type: ignore
enums.TlsRequireCert.DEMAND: ldap.OPT_X_TLS_DEMAND, # type: ignore
enums.TlsRequireCert.HARD: ldap.OPT_X_TLS_HARD, # type: ignore
}
ldaps_tls.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, reqcert_types[req_cert]) # type: ignore
if tls_cipher_suite:
ldaps_tls.set_option(ldap.OPT_X_TLS_CIPHER_SUITE, tls_cipher_suite) # type: ignore
# start_tls if tls is 'on', 'true' or 'yes' and we're not already using old-SSL
if port not in ("636", "3269"):
if tls:
try:
directory.set_option(ldap.OPT_X_TLS_NEWCTX, 0) # type: ignore
directory.start_tls_s()
except Exception:
traceback.print_exc()
return False
else:
ldap.set_option(ldap.OPT_X_TLS_NEWCTX, 0) # type: ignore
# if we're not allowed to search anonymously, grok the search bind settings and attempt to bind
if not api_handle.settings().ldap_anonymous_bind:
searchdn = api_handle.settings().ldap_search_bind_dn
searchpw = api_handle.settings().ldap_search_passwd
if searchdn == "" or searchpw == "":
raise CX("Missing search bind settings")
try:
directory.simple_bind_s(searchdn, searchpw) # type: ignore
except Exception:
traceback.print_exc()
return False
# perform a subtree search in basedn to find the full dn of the user
# TODO: what if username is a CN? maybe it goes into the config file as well?
ldap_filter = prefix + username
result = directory.search_s(basedn, ldap.SCOPE_SUBTREE, ldap_filter, []) # type: ignore
if result:
for ldap_dn, _ in result: # type: ignore
# username _should_ be unique, so we should only have one result
# ignore the entry; we don't need it
pass
else:
return False
try:
# attempt to bind as the user
directory.simple_bind_s(ldap_dn, password) # type: ignore
directory.unbind()
return True
except Exception:
# traceback.print_exc()
return False
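The port-to-scheme mapping in `authenticate()` above (389 plain, 636/3269 old-style LDAPS, anything else appended explicitly) can be sketched as a standalone helper. The name `build_ldap_uri` is hypothetical; it also tests for a space with `in` rather than `str.find`, whose `-1` "not found" return is truthy:

```python
def build_ldap_uri(server_setting: str, port: str) -> str:
    """Build the space-separated LDAP URI list the way authenticate() does:
    389 -> ldap://, 636 -> ldaps://, 3269 -> ldaps://host:port,
    any other port -> ldap://host:port. Multiple servers are space-separated."""
    servers = server_setting.split() if " " in server_setting else [server_setting]
    parts = []
    for server in servers:
        if port == "389":
            parts.append("ldap://" + server)
        elif port == "636":
            parts.append("ldaps://" + server)
        elif port == "3269":
            parts.append(f"ldaps://{server}:{port}")
        else:
            parts.append(f"ldap://{server}:{port}")
    return " ".join(parts)
```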
| 5,840 | Python | .py | 132 | 36.575758 | 113 | 0.65 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,159 | configfile.py | cobbler_cobbler/cobbler/modules/authentication/configfile.py |
"""
Authentication module that uses /etc/cobbler/auth.conf
Choice of authentication module is in /etc/cobbler/modules.conf
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import hashlib
import os
from typing import TYPE_CHECKING, List
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def hashfun(api: "CobblerAPI", text: str) -> str:
"""
Converts a str object to a hash which was configured in modules.conf of the Cobbler settings.
:param api: CobblerAPI
:param text: The text to hash.
:return: The hash of the text. The same input text always produces the same hash.
"""
hashfunction = (
api.settings()
.modules.get("authentication", {})
.get("hash_algorithm", "sha3_512")
)
if hashfunction == "sha3_224":
hashalgorithm = hashlib.sha3_224(text.encode("utf-8"))
elif hashfunction == "sha3_384":
hashalgorithm = hashlib.sha3_384(text.encode("utf-8"))
elif hashfunction == "sha3_256":
hashalgorithm = hashlib.sha3_256(text.encode("utf-8"))
elif hashfunction == "sha3_512":
hashalgorithm = hashlib.sha3_512(text.encode("utf-8"))
elif hashfunction == "blake2b":
hashalgorithm = hashlib.blake2b(text.encode("utf-8"))
elif hashfunction == "blake2s":
hashalgorithm = hashlib.blake2s(text.encode("utf-8"))
elif hashfunction == "shake_128":
hashalgorithm = hashlib.shake_128(text.encode("utf-8"))
elif hashfunction == "shake_256":
hashalgorithm = hashlib.shake_256(text.encode("utf-8"))
else:
errortext = f"The hashfunction (Currently: {hashfunction}) must be one of the defined in the settings!"
raise ValueError(errortext)
# FIXME: Add case for SHAKE
return hashalgorithm.hexdigest() # type: ignore
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "authn"
def __parse_storage() -> List[List[str]]:
"""
Parse the users.digest file and return all users.
:return: A list of all users. A user is a sublist which has three elements: username, realm and passwordhash.
"""
if not os.path.exists("/etc/cobbler/users.digest"):
return []
with open("/etc/cobbler/users.digest", encoding="utf-8") as users_digest_fd:
data = users_digest_fd.read()
results: List[List[str]] = []
lines = data.split("\n")
for line in lines:
try:
line = line.strip()
tokens = line.split(":")
results.append([tokens[0], tokens[1], tokens[2]])
except Exception:
pass
return results
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
"""
Validate a username/password combo.
Thanks to https://trac.edgewall.org/ticket/845 for supplying the algorithm info.
:param api_handle: Unused in this implementation.
:param username: The username to log in with. Must be contained in /etc/cobbler/users.digest
:param password: The password to log in with. Must be contained hashed in /etc/cobbler/users.digest
:return: A boolean which contains the information if the username/password combination is correct.
"""
userlist = __parse_storage()
for user, realm, passwordhash in userlist:
if user == username and realm == "Cobbler":
calculated_passwordhash = hashfun(api_handle, password)
if calculated_passwordhash == passwordhash:
return True
return False
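Assuming the default `sha3_512` algorithm from `hashfun()` and the `user:realm:passwordhash` line format parsed by `__parse_storage()`, a single `/etc/cobbler/users.digest` line can be checked like this (`verify_digest_line` is a hypothetical helper, not part of the module):

```python
import hashlib


def verify_digest_line(line: str, username: str, password: str) -> bool:
    """Check one users.digest line (user:realm:passwordhash) against a
    username/password pair, assuming the default sha3_512 hash algorithm."""
    user, realm, passwordhash = line.strip().split(":")
    if user != username or realm != "Cobbler":
        return False
    return hashlib.sha3_512(password.encode("utf-8")).hexdigest() == passwordhash
```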
| 3,649 | Python | .py | 85 | 36.882353 | 113 | 0.67832 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,160 | pam.py | cobbler_cobbler/cobbler/modules/authentication/pam.py |
"""
Authentication module that uses /etc/cobbler/auth.conf
Choice of authentication module is in /etc/cobbler/modules.conf
PAM python code based on the pam_python code created by Chris AtLee:
https://atlee.ca/software/pam/
#-----------------------------------------------
pam_python (c) 2007 Chris AtLee <chris@atlee.ca>
Licensed under the MIT license:
https://www.opensource.org/licenses/mit-license.php
PAM module for python
Provides an authenticate function that will allow the caller to authenticate
a user against the Pluggable Authentication Modules (PAM) on the system.
Implemented using ctypes, so no compilation is necessary.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# FIXME: Move to the dedicated library python-pam
from ctypes import (
CDLL,
CFUNCTYPE,
POINTER,
Structure,
c_char,
c_char_p,
c_int,
c_uint,
c_void_p,
cast,
pointer,
sizeof,
)
from ctypes.util import find_library
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
LIBPAM = CDLL(find_library("pam"))
LIBC = CDLL(find_library("c"))
CALLOC = LIBC.calloc
CALLOC.restype = c_void_p
CALLOC.argtypes = [c_uint, c_uint]
STRDUP = LIBC.strdup
STRDUP.argtypes = [c_char_p]
STRDUP.restype = POINTER(c_char) # NOT c_char_p !!!!
# Various constants
PAM_PROMPT_ECHO_OFF = 1
PAM_PROMPT_ECHO_ON = 2
PAM_ERROR_MSG = 3
PAM_TEXT_INFO = 4
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "authn"
class PamHandle(Structure):
"""
wrapper class for pam_handle_t
"""
_fields_ = [("handle", c_void_p)]
def __init__(self):
Structure.__init__(self)
self.handle = 0
class PamMessage(Structure):
"""
wrapper class for pam_message structure
"""
_fields_ = [("msg_style", c_int), ("msg", c_char_p)]
def __repr__(self):
return f"<PamMessage {self.msg_style:d} '{self.msg}'>"
class PamResponse(Structure):
"""
wrapper class for pam_response structure
"""
_fields_ = [("resp", c_char_p), ("resp_retcode", c_int)]
def __repr__(self):
return f"<PamResponse {self.resp_retcode:d} '{self.resp}'>"
CONV_FUNC = CFUNCTYPE(
c_int, c_int, POINTER(POINTER(PamMessage)), POINTER(POINTER(PamResponse)), c_void_p
)
class PamConv(Structure):
"""
wrapper class for pam_conv structure
"""
_fields_ = [("conv", CONV_FUNC), ("appdata_ptr", c_void_p)]
PAM_START = LIBPAM.pam_start
PAM_START.restype = c_int
PAM_START.argtypes = [c_char_p, c_char_p, POINTER(PamConv), POINTER(PamHandle)]
PAM_AUTHENTICATE = LIBPAM.pam_authenticate
PAM_AUTHENTICATE.restype = c_int
PAM_AUTHENTICATE.argtypes = [PamHandle, c_int]
PAM_ACCT_MGMT = LIBPAM.pam_acct_mgmt
PAM_ACCT_MGMT.restype = c_int
PAM_ACCT_MGMT.argtypes = [PamHandle, c_int]
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
"""
Validate PAM authentication, returning whether the authentication was successful or not.
:param api_handle: Used for resolving the pam service name and getting the Logger.
:param username: The username to log in with.
:param password: The password to log in with.
:returns: True if the given username and password authenticate for the given service. Otherwise False
"""
@CONV_FUNC
def my_conv(n_messages, messages, p_response, app_data): # type: ignore
"""
Simple conversation function that responds to any prompt where the echo is off with the supplied password
"""
# Create an array of n_messages response objects
addr = CALLOC(n_messages, sizeof(PamResponse))
p_response[0] = cast(addr, POINTER(PamResponse))
for i in range(n_messages): # type: ignore
if messages[i].contents.msg_style == PAM_PROMPT_ECHO_OFF: # type: ignore
pw_copy = STRDUP(password.encode())
p_response.contents[i].resp = cast(pw_copy, c_char_p) # type: ignore
p_response.contents[i].resp_retcode = 0 # type: ignore
return 0
try:
service = api_handle.settings().authn_pam_service
except Exception:
service = "login"
api_handle.logger.debug(f"authn_pam: PAM service is {service}")
handle = PamHandle()
conv = PamConv(my_conv, 0)
retval = PAM_START(
service.encode(), username.encode(), pointer(conv), pointer(handle)
)
if retval != 0:
# TODO: This is not an authentication error, something has gone wrong starting up PAM
api_handle.logger.error("authn_pam: error initializing PAM library")
return False
retval = PAM_AUTHENTICATE(handle, 0)
if retval == 0:
retval = PAM_ACCT_MGMT(handle, 0)
return retval == 0
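`CONV_FUNC` above uses `ctypes.CFUNCTYPE` to wrap a Python callback so PAM's C code can invoke it. The mechanism is easiest to see on a trivial callback; the arithmetic example below is illustrative only (a `CFUNCTYPE`-wrapped function remains callable from Python):

```python
from ctypes import CFUNCTYPE, c_int

# Prototype: int (*)(int, int) -- return type first, then argument types.
ADD_FUNC = CFUNCTYPE(c_int, c_int, c_int)


@ADD_FUNC
def add(a, b):
    """A Python callback wrapped as a C function pointer."""
    return a + b
```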
| 4,938 | Python | .py | 133 | 32.451128 | 113 | 0.682717 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,161 | denyall.py | cobbler_cobbler/cobbler/modules/authentication/denyall.py |
|
"""
Authentication module that denies everything.
Used to disable the WebUI by default.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "authn"
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
"""
Validate a username/password combo, always returning false.
:returns: False
"""
return False
| 681 | Python | .py | 21 | 29.142857 | 81 | 0.742331 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,162 | __init__.py | cobbler_cobbler/cobbler/modules/authentication/__init__.py |
"""
This module represents all Cobbler methods of authentication. All present modules may be used through the configuration
file ``modules.conf`` normally found at ``/etc/cobbler/``.
In the following the specification of an authentication module is given:
#. The name of the only public method - except the generic ``register()`` method - must be ``authenticate``
#. The attributes are - in exactly this order: ``api_handle``, ``username``, ``password``
#. The username and password both must be of type ``str``.
#. The ``api_handle`` must be the main ``CobblerAPI`` instance.
#. The return value of the module must be a ``bool``.
#. The method should only return ``True`` in case the authentication is successful.
#. Errors should result in the return of ``False`` and a log message to the standard Python logger obtained via
``logging.getLogger()``.
#. The return value of ``register()`` must be ``authn``.
The list of currently known authentication modules is:
- authentication.configfile
- authentication.denyall
- authentication.ldap
- authentication.pam
- authentication.passthru
- authentication.spacewalk
"""
| 1,125 | Python | .py | 21 | 52.238095 | 119 | 0.754545 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,163 | passthru.py | cobbler_cobbler/cobbler/modules/authentication/passthru.py |
"""
Authentication module that defers to Apache and trusts
what Apache trusts.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING
from cobbler import utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always "authn"
"""
return "authn"
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
"""
Validate a username/password combo. Uses cobbler_auth_helper
:param api_handle: This parameter is not used currently.
:param username: This parameter is not used currently.
:param password: This should be the internal Cobbler secret.
:return: True if the password is the secret, otherwise false.
"""
return password == utils.get_shared_secret()
| 992 | Python | .py | 26 | 34.615385 | 81 | 0.746862 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,164 | spacewalk.py | cobbler_cobbler/cobbler/modules/authentication/spacewalk.py |
"""
Authentication module that uses Spacewalk's auth system.
Any org_admin or kickstart_admin can get in.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING
from xmlrpc.client import Error, ServerProxy
from cobbler.cexceptions import CX
from cobbler.utils import log_exc
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "authn"
def __looks_like_a_token(password: str) -> bool:
"""
What spacewalk sends us could be an internal token or it could be a password. If it's long and lowercase hex, it's
/likely/ a token, so we try to treat it as a token first and fall back to treating it as a password. All of
this code exists to avoid extra XMLRPC calls, which are slow.
A password gets detected as a token if it is all lowercase and longer than 45 characters.
:param password: The password which is possibly a token.
:return: True if it is possibly a token or False otherwise.
"""
return password.lower() == password and len(password) > 45
def __check_auth_token(
xmlrpc_client: "ServerProxy", api_handle: "CobblerAPI", username: str, password: str
):
"""
This checks if the auth token is valid.
:param xmlrpc_client: The xmlrpc client to check access for.
:param api_handle: The api instance to retrieve settings of.
:param username: The username to try.
:param password: The password to try.
:return: ``False`` in any error case. Otherwise, the return value of the API, which should be 1.
"""
# If the token is not a token this will raise an exception rather than return an integer.
try:
return xmlrpc_client.auth.checkAuthToken(username, password)
except Error:
logger = api_handle.logger
logger.error("Error while checking authentication token.")
log_exc()
return False
def __check_user_login(
xmlrpc_client: ServerProxy,
api_handle: "CobblerAPI",
user_enabled: bool,
username: str,
password: str,
) -> bool:
"""
This actually performs the login to spacewalk.
:param xmlrpc_client: The xmlrpc client bound to the target spacewalk instance.
:param api_handle: The api instance to retrieve settings of.
:param user_enabled: Whether we allow Spacewalk users to log in or not.
:param username: The username to log in.
:param password: The password to log in.
:return: True if users are allowed to log in and he is of the role ``config_admin`` or ``org_admin``.
"""
logger = api_handle.logger
try:
session = xmlrpc_client.auth.login(username, password)
# login success by username, role must also match and user_enabled needs to be true.
roles = xmlrpc_client.user.listRoles(session, username)
if not isinstance(roles, list):
# FIXME: Double check what the actual API is for this!
logger.warning("Uyuni/SUMA returned roles not as a list!")
return False
if user_enabled and ("config_admin" in roles or "org_admin" in roles):
return True
except Error:
logger.error("Error while checking user authentication data.")
log_exc()
return False
def authenticate(api_handle: "CobblerAPI", username: str, password: str) -> bool:
# pylint: disable=line-too-long
"""
Validate a username/password combo. This will pass the username and password back to Spacewalk to see if this
authentication request is valid.
See also: https://github.com/uyuni-project/uyuni/blob/c9b7285117822af96c223cb0b6e0ae96ec7f0837/java/code/src/com/redhat/rhn/frontend/xmlrpc/auth/AuthHandler.java#L107
:param api_handle: The api instance to retrieve settings of.
:param username: The username to authenticate against spacewalk/uyuni/SUSE Manager
:param password: The password to authenticate against spacewalk/uyuni/SUSE Manager
:return: True if it succeeded, False otherwise.
:raises CX: Raised in case ``api_handle`` is missing.
"""
# pylint: enable=line-too-long
if api_handle is None: # type: ignore[reportUnnecessaryComparison]
raise CX("api_handle required. Please don't call this without it.")
server = "https://" + api_handle.settings().redhat_management_server
if api_handle.settings().uyuni_authentication_endpoint:
server = api_handle.settings().uyuni_authentication_endpoint
user_enabled = api_handle.settings().redhat_management_permissive
spacewalk_url = f"{server}/rpc/api"
with ServerProxy(spacewalk_url, verbose=True) as client:
if username == "taskomatic_user" or __looks_like_a_token(password):
# The tokens are lowercase hex, but a password can also be lowercase hex, so we have to try it as both a
# token and then a password if we are unsure. We do it this way to be faster but also to avoid any login
# failed stuff in the logs that we don't need to send.
# Problem at this point, 0xdeadbeef is valid as a token but if that fails, it's also a valid password, so we
# must try auth system #2
if __check_auth_token(client, api_handle, username, password) != 1:
return __check_user_login(
client, api_handle, user_enabled, username, password
)
return True
# It's an older version of spacewalk, so just try the username/pass.
# OR: We know for sure it's not a token because it's not lowercase hex.
return __check_user_login(client, api_handle, user_enabled, username, password)
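The token heuristic used by `__looks_like_a_token()` above can be exercised standalone; the public name `looks_like_a_token` below is hypothetical:

```python
def looks_like_a_token(password: str) -> bool:
    """Same heuristic as __looks_like_a_token(): all-lowercase and longer
    than 45 characters means "probably a Spacewalk session token"."""
    return password.lower() == password and len(password) > 45
```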
| 5,821 | Python | .py | 113 | 45.19469 | 170 | 0.704206 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,165 | post_power.py | cobbler_cobbler/cobbler/modules/installation/post_power.py |
"""
Post install trigger for Cobbler to power cycle the guest if needed
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2010 Bill Peck <bpeck@redhat.com>
import time
from threading import Thread
from typing import TYPE_CHECKING, List
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.system import System
class RebootSystemThread(Thread):
"""
Background thread that reboots the target system after a 30 second delay.
"""
def __init__(self, api: "CobblerAPI", target: "System"):
Thread.__init__(self)
self.api = api
self.target = target
def run(self) -> None:
time.sleep(30)
self.api.power_system(self.target, "reboot")
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
# this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
# the return of this method indicates the trigger type
return "/var/lib/cobbler/triggers/install/post/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
Obligatory trigger hook.
:param api: The api to resolve information with.
:param args: This is an array containing two objects.
0: The str "system". All other content will result in an early exit of the trigger.
1: The name of the target system.
:return: ``0`` on success.
"""
objtype = args[0]
name = args[1]
if objtype == "system":
target = api.find_system(name)
else:
return 0
if isinstance(target, list):
raise ValueError("Ambigous match for search result!")
if target and "postreboot" in target.autoinstall_meta:
# Run this in a thread so the system has a chance to finish and umount the filesystem
current = RebootSystemThread(api, target)
current.start()
return 0
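`RebootSystemThread` is an instance of a generic "run a callback after a delay" thread. A sketch with the delay and callback as parameters (both hypothetical generalizations of the fixed 30-second reboot above):

```python
import threading
import time


class DelayedAction(threading.Thread):
    """Run a callback after a delay, like RebootSystemThread does with its
    30-second sleep before power_system()."""

    def __init__(self, delay: float, callback) -> None:
        super().__init__()
        self.delay = delay
        self.callback = callback

    def run(self) -> None:
        # Sleep first so the caller has a chance to finish its own work.
        time.sleep(self.delay)
        self.callback()
```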
| 1,851 | Python | .py | 51 | 30.54902 | 100 | 0.673206 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,166 | pre_puppet.py | cobbler_cobbler/cobbler/modules/installation/pre_puppet.py |
"""
This module removes puppet certs from the puppet master prior to
reinstalling a machine if the puppet master is running on the Cobbler
server.
Based on:
https://www.ithiriel.com/content/2010/03/29/writing-install-triggers-cobbler
"""
import logging
import re
from typing import TYPE_CHECKING, List
from cobbler import utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
logger = logging.getLogger()
def register() -> str:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type.
:return: Always `/var/lib/cobbler/triggers/install/pre/*`
"""
return "/var/lib/cobbler/triggers/install/pre/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
This method runs the trigger, meaning in this case that old puppet certs are automatically removed via puppetca.
The list of args should have two elements:
- 0: system or profile
- 1: the name of the system or profile
:param api: The api to resolve external information with.
:param args: Already described above.
:return: "0" on success. If unsuccessful this raises an exception.
"""
objtype = args[0]
name = args[1]
if objtype != "system":
return 0
settings = api.settings()
if not settings.puppet_auto_setup:
return 0
if not settings.remove_old_puppet_certs_automatically:
return 0
system = api.find_system(name)
if system is None or isinstance(system, list):
        raise ValueError("Ambiguous search match detected!")
blended_system = utils.blender(api, False, system)
hostname = blended_system["hostname"]
if not re.match(r"[\w-]+\..+", hostname):
search_domains = blended_system["name_servers_search"]
if search_domains:
hostname += "." + search_domains[0]
if not re.match(r"[\w-]+\..+", hostname):
default_search_domains = blended_system["default_name_servers_search"]
if default_search_domains:
hostname += "." + default_search_domains[0]
puppetca_path = settings.puppetca_path
cmd = [puppetca_path, "cert", "clean", hostname]
return_code = 0
try:
return_code = utils.subprocess_call(cmd, shell=False)
except Exception:
logger.warning("failed to execute %s", puppetca_path)
if return_code != 0:
logger.warning("puppet cert removal for %s failed", name)
return 0
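The hostname-qualification logic above (append the first search domain when the hostname is not yet fully qualified) can be sketched as a standalone helper. The name `qualify_hostname` is illustrative, not part of Cobbler:

```python
import re
from typing import List


def qualify_hostname(hostname: str, search_domains: List[str]) -> str:
    """Append the first search domain when the hostname is not yet an FQDN."""
    # Same pattern the trigger uses: "<label>.<rest>" counts as fully qualified.
    if re.match(r"[\w-]+\..+", hostname):
        return hostname
    if search_domains:
        return hostname + "." + search_domains[0]
    return hostname
```

The trigger applies this twice: first with the blended system's `name_servers_search`, then with `default_name_servers_search` as a fallback.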
File: cobbler_cobbler/cobbler/modules/installation/post_puppet.py
"""
This module signs newly installed client puppet certificates if the
puppet master server is running on the same machine as the Cobbler
server.
Based on:
https://www.ithiriel.com/content/2010/03/29/writing-install-triggers-cobbler
"""
import logging
import re
from typing import TYPE_CHECKING, List
from cobbler import utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
logger = logging.getLogger()
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
# this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
# the return of this method indicates the trigger type
return "/var/lib/cobbler/triggers/install/post/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
The obligatory Cobbler modules hook.
:param api: The api to resolve all information with.
    :param args: This is an array with two items. The first must be ``system``; if the value is different we do an
                 early exit. The second item is the name of this system or profile.
:return: ``0`` or nothing.
"""
objtype = args[0]
name = args[1]
if objtype != "system":
return 0
settings = api.settings()
if not settings.puppet_auto_setup:
return 0
if not settings.sign_puppet_certs_automatically:
return 0
system = api.find_system(name)
if system is None or isinstance(system, list):
        raise ValueError("Ambiguous search match!")
blendered_system = utils.blender(api, False, system)
hostname = blendered_system["hostname"]
if not re.match(r"[\w-]+\..+", hostname):
search_domains = blendered_system["name_servers_search"]
if search_domains:
hostname += "." + search_domains[0]
puppetca_path = settings.puppetca_path
cmd = [puppetca_path, "cert", "sign", hostname]
return_code = 0
try:
return_code = utils.subprocess_call(cmd, shell=False)
except Exception:
logger.warning("failed to execute %s", puppetca_path)
if return_code != 0:
logger.warning("signing of puppet cert for %s failed", name)
return 0
File: cobbler_cobbler/cobbler/modules/installation/post_log.py
"""
Cobbler Module Trigger that will mark a system as installed in ``cobbler status``.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2008-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import time
from typing import TYPE_CHECKING, List
from cobbler import validate
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
# this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
# the return of this method indicates the trigger type
return "/var/lib/cobbler/triggers/install/post/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
The method runs the trigger, meaning this logs that an installation has ended.
The list of args should have three elements:
- 0: system or profile
- 1: the name of the system or profile
- 2: the ip or a "?"
:param api: This parameter is unused currently.
:param args: An array of three elements. Type (system/profile), name and ip. If no ip is present use a ``?``.
:return: Always 0
"""
objtype = args[0]
name = args[1]
ip_address = args[2]
if not validate.validate_obj_type(objtype):
return 1
if not api.find_items(objtype, name=name, return_list=False):
return 1
if not (
ip_address == "?"
or validate.ipv4_address(ip_address)
or validate.ipv6_address(ip_address)
):
return 1
# FIXME: use the logger
with open("/var/log/cobbler/install.log", "a", encoding="UTF-8") as install_log_fd:
install_log_fd.write(f"{objtype}\t{name}\t{ip_address}\tstop\t{time.time()}\n")
return 0
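The records this trigger appends to ``install.log`` are tab-separated. A minimal reader for such lines might look as follows; the helper name `parse_install_log_line` is my own, not a Cobbler API:

```python
from typing import Dict


def parse_install_log_line(line: str) -> Dict[str, str]:
    """Split one install.log record into its tab-separated fields."""
    objtype, name, ip_address, event, timestamp = line.rstrip("\n").split("\t")
    return {
        "objtype": objtype,   # "system" or "profile"
        "name": name,
        "ip": ip_address,     # may be "?" when unknown
        "event": event,       # "start" (pre trigger) or "stop" (post trigger)
        "timestamp": timestamp,
    }
```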
File: cobbler_cobbler/cobbler/modules/installation/pre_log.py
"""
Cobbler Module Trigger that will mark the start of a system installation in ``cobbler status``.
"""
import time
from typing import TYPE_CHECKING, List
from cobbler import validate
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type.
:return: Always `/var/lib/cobbler/triggers/install/pre/*`
"""
return "/var/lib/cobbler/triggers/install/pre/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
The method runs the trigger, meaning this logs that an installation has started.
The list of args should have three elements:
- 0: system or profile
- 1: the name of the system or profile
- 2: the ip or a "?"
:param api: This parameter is currently unused.
:param args: Already described above.
:return: A "0" on success.
"""
objtype = args[0]
name = args[1]
ip_address = args[2]
if not validate.validate_obj_type(objtype):
return 1
if not api.find_items(objtype, name=name, return_list=False):
return 1
if not (
ip_address == "?"
or validate.ipv4_address(ip_address)
or validate.ipv6_address(ip_address)
):
return 1
# FIXME: use the logger
with open("/var/log/cobbler/install.log", "a", encoding="UTF-8") as install_log_fd:
install_log_fd.write(f"{objtype}\t{name}\t{ip_address}\tstart\t{time.time()}\n")
return 0
File: cobbler_cobbler/cobbler/modules/installation/__init__.py
"""
This module contains Python triggers for Cobbler.
With Cobbler one is able to add custom actions and commands after many events happening in Cobbler. The Python modules
presented here are an example of what can be done after certain events. Custom triggers may be added in any language as
long as Cobbler is allowed to execute them. If implemented in Python they need to follow the following specification:
- Expose a method called ``register()`` which returns a ``str`` and returns the path of the trigger in the filesystem.
- Expose a method called ``run(api, args)`` of type ``int``. The integer would represent the exit status of an e.g.
shell script. Thus 0 means success and anything else a failure.
"""
File: cobbler_cobbler/cobbler/modules/installation/post_report.py
"""
Post install trigger for Cobbler to send out a pretty email report that contains target information.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2008-2009 Bill Peck <bpeck@redhat.com>
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import smtplib
from builtins import str
from typing import TYPE_CHECKING, List
from cobbler import templar, utils
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always ``/var/lib/cobbler/triggers/install/post/*``.
"""
# this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
# the return of this method indicates the trigger type
return "/var/lib/cobbler/triggers/install/post/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
This is the mandatory Cobbler module run trigger hook.
:param api: The api to resolve information with.
:param args: This is an array with three elements.
0: "system" or "profile"
1: name of target or profile
2: ip or "?"
:return: ``0`` or ``1``.
:raises CX: Raised if the blender result is empty.
"""
# FIXME: make everything use the logger
settings = api.settings()
# go no further if this feature is turned off
if not settings.build_reporting_enabled:
return 0
objtype = args[0]
name = args[1]
boot_ip = args[2]
if objtype == "system":
target = api.find_system(name)
elif objtype == "profile":
target = api.find_profile(name)
else:
return 1
if target is None or isinstance(target, list):
raise ValueError("Error retrieving system/profile.")
# collapse the object down to a rendered datastructure
target = utils.blender(api, False, target)
if target == {}:
raise CX("failure looking up target")
to_addr = settings.build_reporting_email
if len(to_addr) < 1:
return 0
# add the ability to specify an MTA for servers that don't run their own
smtp_server = settings.build_reporting_smtp_server
if smtp_server == "":
smtp_server = "localhost"
# use a custom from address or fall back to a reasonable default
from_addr = settings.build_reporting_sender
if from_addr == "":
from_addr = f"cobbler@{settings.server}"
subject = settings.build_reporting_subject
if subject == "":
subject = "[Cobbler] install complete "
to_addr = ",".join(to_addr)
metadata = {
"from_addr": from_addr,
"to_addr": to_addr,
"subject": subject,
"boot_ip": boot_ip,
}
metadata.update(target)
with open(
"/etc/cobbler/reporting/build_report_email.template", encoding="UTF-8"
) as input_template:
input_data = input_template.read()
message = templar.Templar(api).render(input_data, metadata, None)
sendmail = True
for prefix in settings.build_reporting_ignorelist:
if prefix != "" and name.startswith(prefix):
sendmail = False
if sendmail:
# Send the mail
# FIXME: on error, return non-zero
server_handle = smtplib.SMTP(smtp_server)
server_handle.sendmail(from_addr, to_addr.split(","), message)
server_handle.quit()
return 0
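The ignorelist check near the end of the trigger can be isolated into a small predicate; `should_send_report` is an illustrative name, not part of Cobbler:

```python
from typing import List


def should_send_report(name: str, ignorelist: List[str]) -> bool:
    """Mirror the trigger's check: skip the report mail when the target's name
    starts with any non-empty prefix from build_reporting_ignorelist."""
    return not any(prefix and name.startswith(prefix) for prefix in ignorelist)
```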
File: cobbler_cobbler/cobbler/modules/installation/pre_clear_anamon_logs.py
"""
Cobbler Module Trigger that will clear the anamon logs.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2008-2009, Red Hat Inc.
# SPDX-FileCopyrightText: James Laska <jlaska@redhat.com>
# SPDX-FileCopyrightText: Bill Peck <bpeck@redhat.com>
import glob
import logging
import os
from typing import TYPE_CHECKING, List
from cobbler.cexceptions import CX
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
PATH_PREFIX = "/var/log/cobbler/anamon/"
logger = logging.getLogger()
def register() -> str:
"""
This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
indicates the trigger type.
:return: Always ``/var/lib/cobbler/triggers/install/pre/*``
"""
return "/var/lib/cobbler/triggers/install/pre/*"
def run(api: "CobblerAPI", args: List[str]) -> int:
"""
    The list of args should have at least three elements; only the second is used:
    - 1: the name of the system or profile
:param api: The api to resolve metadata with.
:param args: This should be a list as described above.
:return: "0" on success.
:raises CX: Raised in case of missing arguments.
"""
if len(args) < 3:
raise CX("invalid invocation")
name = args[1]
settings = api.settings()
# Remove any files matched with the given glob pattern
def unlink_files(globex: str) -> None:
for file in glob.glob(globex):
if os.path.isfile(file):
filesystem_helpers.rmfile(file)
if settings.anamon_enabled:
dirname = os.path.join(PATH_PREFIX, name)
if os.path.isdir(dirname):
unlink_files(os.path.join(dirname, "*"))
logger.info('Cleared Anamon logs for "%s".', name)
return 0
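The nested `unlink_files` helper boils down to a glob-and-remove loop, which can be sketched standalone (`unlink_matching_files` is an assumed name, not a Cobbler API):

```python
import glob
import os


def unlink_matching_files(pattern: str) -> int:
    """Remove every regular file matching the glob pattern; return how many."""
    removed = 0
    for path in glob.glob(pattern):
        if os.path.isfile(path):  # skip directories and other non-files
            os.unlink(path)
            removed += 1
    return removed
```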
File: cobbler_cobbler/cobbler/modules/authorization/allowall.py
"""
Authorization module that allows everything, which is the default for new Cobbler installs.
"""
from typing import TYPE_CHECKING, Any
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always "authz"
"""
return "authz"
def authorize(
api_handle: "CobblerAPI",
user: str,
resource: str,
arg1: Any = None,
arg2: Any = None,
) -> int:
"""
Validate a user against a resource.
NOTE: acls are not enforced as there is no group support in this module
:param api_handle: This parameter is not used currently.
:param user: This parameter is not used currently.
:param resource: This parameter is not used currently.
:param arg1: This parameter is not used currently.
:param arg2: This parameter is not used currently.
:return: Always ``1``
"""
return 1
File: cobbler_cobbler/cobbler/modules/authorization/configfile.py
"""
Authorization module that allows users listed in
/etc/cobbler/users.conf to be permitted to access resources.
For instance, when using authz_ldap, you want to use authn_configfile,
not authz_allowall, which will most likely NOT do what you want.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import os
from configparser import ConfigParser
from typing import TYPE_CHECKING, Any, Dict
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
CONFIG_FILE = "/etc/cobbler/users.conf"
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always "authz".
"""
return "authz"
def __parse_config() -> Dict[str, Dict[Any, Any]]:
"""
    Parse the users.conf file.
:return: The data of the config file.
"""
if not os.path.exists(CONFIG_FILE):
return {}
config = ConfigParser()
config.read(CONFIG_FILE)
alldata: Dict[str, Dict[str, Any]] = {}
groups = config.sections()
for group in groups:
alldata[str(group)] = {}
options = config.options(group)
for option in options:
alldata[group][option] = 1
return alldata
def authorize(
api_handle: "CobblerAPI",
user: str,
resource: str,
arg1: Any = None,
arg2: Any = None,
) -> int:
"""
Validate a user against a resource. All users in the file are permitted by this module.
:param api_handle: This parameter is not used currently.
:param user: The user to authorize.
:param resource: This parameter is not used currently.
:param arg1: This parameter is not used currently.
:param arg2: This parameter is not used currently.
    :return: "0" if not authorized, "1" if authorized.
"""
# FIXME: this must be modified to use the new ACL engine
data = __parse_config()
for _, group_data in data.items():
if user.lower() in group_data:
return 1
return 0
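The parsing done by `__parse_config` can be reproduced on an in-memory string to see the resulting shape. `parse_users_conf` is an illustrative stand-in that uses `read_string` instead of reading `/etc/cobbler/users.conf`:

```python
from configparser import ConfigParser
from typing import Any, Dict


def parse_users_conf(text: str) -> Dict[str, Dict[str, Any]]:
    """Parse users.conf-style INI text into {group: {user: 1}}, as __parse_config does."""
    config = ConfigParser()
    config.read_string(text)
    alldata: Dict[str, Dict[str, Any]] = {}
    for group in config.sections():
        # Each option name in a section is a user; the value is ignored.
        alldata[group] = {option: 1 for option in config.options(group)}
    return alldata
```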
File: cobbler_cobbler/cobbler/modules/authorization/__init__.py
"""
This module represents all Cobbler methods of authorization. All present modules may be used through the configuration
file ``modules.conf`` normally found at ``/etc/cobbler/``.
In the following the specification of an authorization module is given:
#. The name of the only public method - except the generic ``register()`` method - must be ``authorize``
#. The attributes are - in exactly that order: ``api_handle``, ``user``, ``resource``, ``arg1``, ``arg2``
#. The ``api_handle`` must be the main ``CobblerAPI`` instance.
#. The ``user`` and ``resource`` attribute must be of type ``str``.
#. The attributes ``arg1`` and ``arg2`` are reserved for the individual use of your authorization module and may have
any type and form your desire.
#. The method must return an integer in all cases.
#. The method should return ``1`` for success and ``0`` for an authorization failure.
#. Additional codes can be defined, however they should be documented in the module description.
#. The values of additional codes should be positive integers.
#. Errors should result in the return of ``-1`` and a log message to the standard Python logger obtained via
``logging.getLogger()``.
#. The return value of ``register()`` must be ``authz``.
"""
File: cobbler_cobbler/cobbler/modules/authorization/ownership.py
"""
Authorization module that allows users listed in
/etc/cobbler/users.conf to be permitted to access resources, with
the further restriction that Cobbler objects can be edited to only
allow certain users/groups to access those specific objects.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import os
from configparser import ConfigParser
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.abstract.base_item import BaseItem
from cobbler.items.profile import Profile
from cobbler.items.system import System
def register() -> str:
"""
The mandatory Cobbler module registration hook.
:return: Always "authz"
"""
return "authz"
def __parse_config() -> Dict[str, Dict[str, Any]]:
"""
Parse the "users.conf" of Cobbler and return all data in a dictionary.
:return: The data separated by sections. Each section has a subdictionary with the key-value pairs.
    :raises FileNotFoundError: If /etc/cobbler/users.conf does not exist.
"""
etcfile = "/etc/cobbler/users.conf"
if not os.path.exists(etcfile):
raise FileNotFoundError("/etc/cobbler/users.conf does not exist")
# Make users case sensitive to handle kerberos
config = ConfigParser()
config.optionxform = lambda optionstr: optionstr
config.read(etcfile)
alldata: Dict[str, Dict[str, Any]] = {}
sections = config.sections()
for group in sections:
alldata[str(group)] = {}
options = config.options(group)
for option in options:
alldata[group][option] = 1
return alldata
def __authorize_autoinst(
api_handle: "CobblerAPI", groups: List[str], user: str, autoinst: str
) -> int:
"""
The authorization rules for automatic installation file editing are a bit of a special case. Non-admin users can
    edit an automatic installation file only if all objects that depend on that automatic installation file are editable
by the user in question.
Example:
if Pinky owns ProfileA
and the Brain owns ProfileB
and both profiles use the same automatic installation template
and neither Pinky nor the Brain is an admin
neither is allowed to edit the automatic installation template
because they would make unwanted changes to each other
In the above scenario the UI will explain the problem and ask that the user asks the admin to resolve it if
required.
NOTE: this function is only called by authorize so admin users are cleared before this function is called.
:param api_handle: The api to resolve required information.
:param groups: The groups a user is in.
:param user: The user which is asking for access.
:param autoinst: The automatic installation in question.
:return: ``1`` if the user is allowed and otherwise ``0``.
"""
lst: List[Union["Profile", "System"]] = []
my_profiles = api_handle.find_profile(autoinst=autoinst, return_list=True)
if my_profiles is None or not isinstance(my_profiles, list):
raise ValueError("list of profile was not a list")
lst.extend(my_profiles)
del my_profiles
my_systems = api_handle.find_system(autoinst=autoinst, return_list=True)
if my_systems is None or not isinstance(my_systems, list):
raise ValueError("list of profile was not a list")
lst.extend(my_systems)
del my_systems
for obj in lst:
if not __is_user_allowed(obj, groups, user, "write_autoinst", autoinst, None):
return 0
return 1
def __authorize_snippet(
api_handle: "CobblerAPI", groups: List[str], user: str, autoinst: str
) -> int:
"""
Only allow admins to edit snippets -- since we don't have detection to see where each snippet is in use.
:param api_handle: Unused parameter.
:param groups: The group which is asking for access.
:param user: Unused parameter.
:param autoinst: Unused parameter.
:return: ``1`` if the group is allowed, otherwise ``0``.
"""
del api_handle, user, autoinst # unused
for group in groups:
if group not in ["admins", "admin"]:
return 0
return 1
def __is_user_allowed(
obj: "BaseItem", groups: List[str], user: str, resource: Any, arg1: Any, arg2: Any
) -> int:
"""
Check if a user is allowed to access the resource in question.
:param obj: The object which is in question.
:param groups: The groups a user is belonging to.
:param user: The user which is demanding access to the ``obj``.
:param resource: Unused parameter.
:param arg1: Unused parameter.
:param arg2: Unused parameter.
:return: ``1`` if user is allowed, otherwise ``0``.
"""
del resource, arg1, arg2 # unused
if user == "<DIRECT>":
# system user, logged in via web.ss
return 1
for group in groups:
if group in ["admins", "admin"]:
return 1
if obj.owners == []:
return 1
for allowed in obj.owners:
if user == allowed:
# user match
return 1
# else look for a group match
for group in groups:
if group == allowed:
return 1
return 0
def authorize(
api_handle: "CobblerAPI",
user: str,
resource: str,
arg1: Optional[str] = None,
arg2: Any = None,
) -> int:
"""
Validate a user against a resource. All users in the file are permitted by this module.
:param api_handle: The api to resolve required information.
:param user: The user to authorize to the resource.
:param resource: The resource the user is asking for access. This is something abstract like a remove operation.
:param arg1: This is normally the name of the specific object in question.
    :param arg2: This parameter is currently unused and reserved for future code.
:return: ``1`` if okay, otherwise ``0``.
"""
if user == "<DIRECT>":
# CLI should always be permitted
return 1
# Everybody can get read-only access to everything if they pass authorization, they don't have to be in users.conf
if resource is not None: # type: ignore[reportUnnecessaryComparison]
# FIXME: /cobbler/web should not be subject to user check in any case
for user_resource in ["get", "read", "/cobbler/web"]:
if resource.startswith(user_resource):
return 1 # read operation is always ok.
user_groups = __parse_config()
# classify the type of operation
modify_operation = False
for criteria in [
"save",
"copy",
"rename",
"remove",
"modify",
"edit",
"xapi",
"background",
]:
if resource.find(criteria) != -1:
modify_operation = True
# FIXME: is everyone allowed to copy? I think so.
# FIXME: deal with the problem of deleted parents and promotion
found_user = False
found_groups: List[str] = []
grouplist = list(user_groups.keys())
for group in grouplist:
for group_user in user_groups[group]:
if group_user == user:
found_groups.append(group)
found_user = True
# if user is in the admin group, always authorize
# regardless of the ownership of the object.
if group in ("admins", "admin"):
return 1
if not found_user:
# if the user isn't anywhere in the file, reject regardless
# they can still use read-only XMLRPC
return 0
if not modify_operation:
# sufficient to allow access for non save/remove ops to all
# users for now, may want to refine later.
return 1
# Now we have a modify_operation op, so we must check ownership of the object. Remove ops pass in arg1 as a string
# name, saves pass in actual objects, so we must treat them differently. Automatic installaton files are even more
# special so we call those out to another function, rather than going through the rest of the code here.
if arg1 is None:
# We require to have an item name available, otherwise no search can be performed.
return 0
if resource.find("write_autoinstall_template") != -1:
return __authorize_autoinst(api_handle, found_groups, user, arg1)
if resource.find("read_autoinstall_template") != -1:
return 1
# The API for editing snippets also needs to do something similar. As with automatic installation files, though
# since they are more widely used it's more restrictive.
if resource.find("write_autoinstall_snippet") != -1:
return __authorize_snippet(api_handle, found_groups, user, arg1)
if resource.find("read_autoinstall_snipppet") != -1:
return 1
obj = None
if resource.find("remove") != -1:
if resource == "remove_distro":
obj = api_handle.find_distro(arg1)
elif resource == "remove_profile":
obj = api_handle.find_profile(arg1)
elif resource == "remove_system":
obj = api_handle.find_system(arg1)
elif resource == "remove_repo":
obj = api_handle.find_repo(arg1)
elif resource == "remove_image":
obj = api_handle.find_image(arg1)
elif resource.find("save") != -1 or resource.find("modify") != -1:
obj = api_handle.find_items(what="", name=arg1)
if obj is None or isinstance(obj, list):
raise ValueError("Object not found or found multiple times!")
# if the object has no ownership data, allow access regardless
if obj is None or obj.owners is None or obj.owners == []: # type: ignore[reportUnnecessaryComparison]
return 1
return __is_user_allowed(obj, found_groups, user, resource, arg1, arg2)
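The core ownership rule implemented by `__is_user_allowed` can be condensed into a small pure function; `is_allowed` is an illustrative name, and the `<DIRECT>` special case is omitted:

```python
from typing import List


def is_allowed(owners: List[str], user: str, groups: List[str]) -> int:
    """Ownership rule: admins always pass, unowned objects are open to everyone,
    otherwise the user or one of their groups must appear in the owner list."""
    if any(group in ("admins", "admin") for group in groups):
        return 1
    if not owners:
        return 1
    for allowed in owners:
        if user == allowed or allowed in groups:
            return 1
    return 0
```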
File: cobbler_cobbler/cobbler/modules/serializers/sqlite.py
"""
Cobbler's SQLite database based object serializer.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2024 Yuriy Chelpanov <yuriy.chelpanov@gmail.com>
import json
import logging
import os
import sqlite3
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from cobbler.cexceptions import CX
from cobbler.modules.serializers import StorageBase
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM, Collection
from cobbler.items.abstract.base_item import BaseItem
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "serializer"
def what() -> str:
"""
Module identification function
"""
return "serializer/sqlite"
class SQLiteSerializer(StorageBase):
"""
Each collection is stored in a separate table named distros, profiles, etc.
Tables are created on demand, when the first object of this type is written.
TABLE name // name from collection.collection_types()
(
name TEXT PRIMARY KEY, // name from item.name
item TEXT // JSON representation of an object
)
"""
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.logger = logging.getLogger()
self.connection: Optional[sqlite3.Connection] = None
self.arraysize = 1000
self.database_file = "/var/lib/cobbler/collections/collections.db"
def __connect(self) -> None:
"""
Connect to the sqlite.
"""
if self.connection is not None:
return
is_new_database = not os.path.isfile(self.database_file)
conn = sqlite3.connect(":memory:")
threadsafety_option = conn.execute(
"""
select * from pragma_compile_options
where compile_options like 'THREADSAFE=%'
"""
).fetchone()[0]
conn.close()
threadsafety = int(threadsafety_option.split("=")[1])
if threadsafety != 1:
raise CX(
f"You cannot use SQLite compiled with SQLITE_THREADSAFE={threadsafety} with Cobbler.\n"
"Please compile the code with the option SQLITE_THREADSAFE=1"
)
try:
self.connection = sqlite3.connect(
self.database_file,
detect_types=sqlite3.PARSE_DECLTYPES,
check_same_thread=False,
)
except sqlite3.DatabaseError as error:
raise CX(
f'Unable to connect to SQLite database "{self.database_file}": {error}'
) from error
if is_new_database:
self.logger.info(
                'Database with name "%s" was not found and will be created.',
self.database_file,
)
def __create_table(self, table_name: str) -> None:
"""
Creates a new SQLite table.
:param table_name: The table name.
"""
try:
self.connection.execute(f"CREATE TABLE {table_name}(name text primary key, item text)") # type: ignore
except sqlite3.DatabaseError as error:
raise CX(f'Unable to create table "{table_name}": {error}') from error
def __is_table_exists(self, table_name: str) -> bool:
"""
Return True if the table exists.
:param table_name: The table name.
:return: True if the table exists. Otherwise false.
"""
cursor = self.connection.execute( # type: ignore
"SELECT name FROM sqlite_master WHERE name=:name", {"name": table_name}
)
if cursor.fetchone() is None:
return False
return True
def __upsert_items(
self, table_name: str, bind_vars: List[Optional[Dict[str, str]]]
) -> None:
"""
Insert/Update values into the table.
:param table_name: The table name.
:param bind_vars: The list of bind variables for SQL statement.
"""
if len(bind_vars) == 0:
return
self.__connect()
if not self.__is_table_exists(table_name):
self.__create_table(table_name)
try:
self.connection.executemany( # type: ignore
f"INSERT INTO {table_name}(name, item) " # nosec
"VALUES(:name, :item) "
"ON CONFLICT(name) DO UPDATE SET item=excluded.item",
bind_vars, # type: ignore
)
self.connection.commit() # type: ignore
except sqlite3.DatabaseError as error:
raise CX(f'Unable to upsert into table "{table_name}": {error}') from error
def __build_bind_vars(self, item: "BaseItem") -> Dict[str, str]:
"""
Build the bind variables for Insert/Update.
:param item: The object for Insert/Update.
:return: The bind variables dict.
"""
if self.api.settings().serializer_pretty_json:
sort_keys = True
indent = 4
else:
sort_keys = False
indent = None
_dict = item.serialize()
data = json.dumps(_dict, sort_keys=sort_keys, indent=indent)
return {"name": item.name, "item": data}
def serialize_item(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Save a collection item to table
:param collection: The Cobbler collection to know the type of the item.
:param item: The collection item to serialize.
"""
if not item.name:
raise CX("name unset for item!")
self.__connect()
self.__upsert_items(
collection.collection_types(), [self.__build_bind_vars(item)]
)
def serialize(self, collection: "Collection[ITEM]") -> None:
"""
Save a collection to disk
:param collection: The collection to serialize.
"""
self.__connect()
ctype = collection.collection_types()
bind_vars: List[Optional[Dict[str, str]]] = []
for item in collection:
bind_vars.append(self.__build_bind_vars(item))
self.__upsert_items(ctype, bind_vars)
def serialize_delete(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Delete a collection item from table
:param collection: The Cobbler collection to know the type of the item.
:param item: The collection item to delete.
"""
self.__connect()
table_name = collection.collection_types()
try:
self.connection.execute( # type: ignore
f"DELETE FROM {table_name} WHERE name=:name", # nosec
{"name": item.name},
)
self.connection.commit() # type: ignore
except sqlite3.DatabaseError as error:
raise CX(
f'Unable to delete from table "{table_name}": {error}' # nosec
) from error
def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
"""
Read the collection from the table.
:param collection_type: The collection type to read.
:return: The list of collection dicts.
"""
self.__connect()
results: List[Dict[str, Any]] = []
if not self.__is_table_exists(collection_type):
return results
projection = "item"
lazy_start = self.api.settings().lazy_start
if lazy_start:
projection = "name"
try:
cursor = self.connection.execute( # type: ignore
f"SELECT {projection} FROM {collection_type}" # nosec
)
except sqlite3.DatabaseError as error:
raise CX(
f'Unable to SELECT from table "{collection_type}": {error}' # nosec
) from error
cursor.arraysize = self.arraysize
for result in cursor.fetchall():
if lazy_start:
_dict = {"name": result[0]}
else:
_dict = json.loads(result[0])
_dict["inmemory"] = not lazy_start
results.append(_dict)
return results
def deserialize(
self, collection: "Collection[ITEM]", topological: bool = True
) -> None:
"""
Load a collection from disk.
:param collection: The Cobbler collection to know the type of the item.
:param topological: Sort collection based on each items' depth attribute in the list of collection items. This
ensures properly ordered object loading from disk with objects having parent/child
relationships, i.e. profiles/subprofiles. See cobbler/items/abstract/inheritable_item.py
"""
self.__connect()
datastruct = self.deserialize_raw(collection.collection_types())
if topological and isinstance(datastruct, list): # type: ignore
datastruct.sort(key=lambda x: x.get("depth", 1)) # type: ignore
collection.from_list(datastruct) # type: ignore
def deserialize_item(self, collection_type: str, name: str) -> Dict[str, Any]:
"""
Get a collection item from disk and parse it into an object.
:param collection_type: The collection type to deserialize.
        :param name: The collection item name to deserialize.
:return: Dictionary of the collection item.
"""
self.__connect()
if not self.__is_table_exists(collection_type):
raise CX(
f"Item {name} of collection {collection_type} was not found in SQLite database {self.database_file}!"
)
try:
cursor = self.connection.execute( # type: ignore
f"SELECT item from {collection_type} WHERE name=:name", # nosec
{"name": name},
)
except sqlite3.DatabaseError as error:
raise CX(
f'Unable to SELECT from table "{collection_type}": {error}' # nosec
) from error
result = cursor.fetchone()
if result is None:
raise CX(
f"Item {name} of collection {collection_type} was not found in SQLite database {self.database_file}!"
)
_dict = json.loads(result[0])
if _dict["name"] != name:
raise CX(
f"The file name {name} does not match the {_dict['name']} {collection_type}!"
)
_dict["inmemory"] = True
return _dict
def storage_factory(api: "CobblerAPI") -> SQLiteSerializer:
"""
    Create and return a SQLiteSerializer instance for the given API.
"""
return SQLiteSerializer(api)
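The `ON CONFLICT(name) DO UPDATE` upsert that `__upsert_items()` builds can be exercised in isolation. A minimal sketch against an in-memory database (table and bind-variable names mirror the serializer above; the item dict is a made-up stand-in, and the upsert clause needs SQLite >= 3.24):

```python
# Minimal sketch of the upsert pattern used by SQLiteSerializer, run against
# an in-memory database instead of the on-disk Cobbler database file.
import json
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE distros(name TEXT PRIMARY KEY, item TEXT)")


def upsert_items(table_name, bind_vars):
    # INSERT new rows; on a name collision, overwrite the stored JSON blob.
    connection.executemany(
        f"INSERT INTO {table_name}(name, item) "
        "VALUES(:name, :item) "
        "ON CONFLICT(name) DO UPDATE SET item=excluded.item",
        bind_vars,
    )
    connection.commit()


upsert_items(
    "distros",
    [{"name": "f39", "item": json.dumps({"name": "f39", "arch": "x86_64"})}],
)
# Upserting the same name again replaces the stored item instead of failing.
upsert_items(
    "distros",
    [{"name": "f39", "item": json.dumps({"name": "f39", "arch": "aarch64"})}],
)
row = connection.execute("SELECT item FROM distros WHERE name='f39'").fetchone()
```

Running the upsert twice for the same name leaves a single row containing the latest JSON blob, which is exactly what `serialize_item()` relies on.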

# File: cobbler_cobbler/cobbler/modules/serializers/file.py

"""
Cobbler's file-based object serializer.
As of 9/2014, this is Cobbler's default serializer and the most stable one.
It uses multiple JSON files in /var/lib/cobbler/collections/distros, profiles, etc
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import glob
import json
import logging
import os
from typing import TYPE_CHECKING, Any, Dict, List
import cobbler.api as capi
from cobbler.cexceptions import CX
from cobbler.modules.serializers import StorageBase
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM, Collection
logger = logging.getLogger()
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
return "serializer"
def what() -> str:
"""
Module identification function
"""
return "serializer/file"
def _find_double_json_files(filename: str) -> None:
"""
Finds a file with duplicate .json ending and renames it.
:param filename: Filename to be checked
:raises FileExistsError: If both JSON files exist
"""
if not os.path.isfile(filename):
if os.path.isfile(filename + ".json"):
os.rename(filename + ".json", filename)
else:
if os.path.isfile(filename + ".json"):
            raise FileExistsError(f"Both JSON files ({filename}) exist!")
class FileSerializer(StorageBase):
"""
    File-based serializer that stores each collection item as one JSON file below the collections directory.
"""
def __init__(self, api: "CobblerAPI") -> None:
super().__init__(api)
self.libpath = "/var/lib/cobbler/collections"
def serialize_item(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
if not item.name:
raise CX("name unset for item!")
collection_types = collection.collection_types()
filename = os.path.join(self.libpath, collection_types, item.name + ".json")
_find_double_json_files(filename)
if capi.CobblerAPI().settings().serializer_pretty_json:
sort_keys = True
indent = 4
else:
sort_keys = False
indent = None
_dict = item.serialize()
with open(filename, "w", encoding="UTF-8") as file_descriptor:
data = json.dumps(_dict, sort_keys=sort_keys, indent=indent)
file_descriptor.write(data)
def serialize_delete(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
collection_types = collection.collection_types()
filename = os.path.join(self.libpath, collection_types, item.name + ".json")
_find_double_json_files(filename)
if os.path.exists(filename):
os.remove(filename)
def serialize(self, collection: "Collection[ITEM]") -> None:
# do not serialize settings
if collection.collection_type() != "setting":
for item in collection:
self.serialize_item(collection, item)
def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
results: List[Dict[str, Any]] = []
path = os.path.join(self.libpath, collection_type)
all_files = glob.glob(f"{path}/*.json")
lazy_start = self.api.settings().lazy_start
for file in all_files:
(name, _) = os.path.splitext(os.path.basename(file))
if lazy_start:
_dict = {"name": name, "inmemory": False}
else:
with open(file, encoding="UTF-8") as file_descriptor:
json_data = file_descriptor.read()
_dict = json.loads(json_data)
if _dict["name"] != name:
raise CX(
f"The file name {name}.json does not match the {_dict['name']} {collection_type}!"
)
results.append(_dict)
return results # type: ignore
def deserialize(
self, collection: "Collection[ITEM]", topological: bool = True
) -> None:
datastruct = self.deserialize_raw(collection.collection_types())
if topological:
datastruct.sort(key=lambda x: x.get("depth", 1))
try:
if isinstance(datastruct, dict):
# This is currently the corner case for the settings type.
collection.from_dict(datastruct) # type: ignore
elif isinstance(datastruct, list): # type: ignore
collection.from_list(datastruct) # type: ignore
except Exception as exc:
logger.error(
"Error while loading a collection: %s. Skipping collection %s!",
exc,
collection.collection_type(),
)
def deserialize_item(self, collection_type: str, name: str) -> Dict[str, Any]:
"""
Get a collection item from disk and parse it into an object.
:param collection_type: The collection type to fetch.
:param name: collection Item name
:return: Dictionary of the collection item.
"""
path = os.path.join(self.libpath, collection_type, f"{name}.json")
with open(path, encoding="UTF-8") as file_descriptor:
json_data = file_descriptor.read()
_dict = json.loads(json_data)
if _dict["name"] != name:
raise CX(
f"The file name {name}.json does not match the {_dict['name']} {collection_type}!"
)
_dict["inmemory"] = True
return _dict
def storage_factory(api: "CobblerAPI") -> FileSerializer:
"""
    Create and return a FileSerializer instance for the given API.
"""
return FileSerializer(api)
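The on-disk layout used by `FileSerializer` — one `<name>.json` file per item below the collections directory — can be sketched with a temporary directory standing in for `/var/lib/cobbler/collections` (the item dict here is a made-up stand-in, not a real distro):

```python
# Hypothetical round-trip of a single item through the FileSerializer layout:
# one <name>.json file per item, optionally pretty-printed.
import json
import os
import tempfile

libpath = tempfile.mkdtemp()  # stand-in for /var/lib/cobbler/collections
os.makedirs(os.path.join(libpath, "distros"))

item = {"name": "f39", "arch": "x86_64", "depth": 0}
filename = os.path.join(libpath, "distros", item["name"] + ".json")
with open(filename, "w", encoding="UTF-8") as fd:
    fd.write(json.dumps(item, sort_keys=True, indent=4))

# Mirror of deserialize_item(): load, verify the stored name matches the
# file name, and mark the item as loaded into memory.
with open(filename, encoding="UTF-8") as fd:
    loaded = json.loads(fd.read())
assert loaded["name"] == "f39"
loaded["inmemory"] = True
```

The name-consistency check matters because the file name, not the JSON payload, is what `deserialize_raw()` uses as the item key under `lazy_start`.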

# File: cobbler_cobbler/cobbler/modules/serializers/__init__.py

"""
This module contains code to persist the in memory state of Cobbler on a target. The name of the target should be the
name of the Python file. Cobbler is currently only tested against the file serializer.
"""
from typing import TYPE_CHECKING, Any, Dict, List
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM, Collection
class StorageBase:
"""
    Base class defining the interface that every serializer module must implement.
"""
def __init__(self, api: "CobblerAPI"):
self.api = api
def serialize_item(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Save a collection item to disk
:param collection: The Cobbler collection to know the type of the item.
:param item: The collection item to serialize.
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def serialize_delete(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
"""
Delete a collection item from disk.
:param collection: collection
:param item: collection item
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def serialize(self, collection: "Collection[ITEM]") -> None:
"""
Save a collection to disk
:param collection: The collection to serialize.
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
"""
Read the collection from the disk.
:param collection_type: The collection type to read.
:return: The list of collection dicts.
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def deserialize(
self, collection: "Collection[ITEM]", topological: bool = True
) -> None:
"""
Load a collection from disk.
:param collection: The Cobbler collection to know the type of the item.
:param topological: Sort collection based on each items' depth attribute in the list of collection items. This
ensures properly ordered object loading from disk with objects having parent/child
relationships, i.e. profiles/subprofiles. See cobbler/items/abstract/inheritable_item.py
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def deserialize_item(self, collection_type: str, name: str) -> Dict[str, Any]:
"""
Get a collection item from disk and parse it into an object.
:param collection_type: The collection type to deserialize.
        :param name: The collection item name to deserialize.
:return: Dictionary of the collection item.
"""
raise NotImplementedError(
"The implementation for the configured serializer is missing!"
)
def register() -> str:
"""
    The mandatory Cobbler module registration hook.
"""
return "StorageBase"
def what() -> str:
"""
    Module identification function
"""
return "serializer/base"
def storage_factory(api: "CobblerAPI") -> StorageBase:
"""
    Create and return a StorageBase instance for the given API.
"""
return StorageBase(api)
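The contract above is a classic plugin pattern: the base class raises `NotImplementedError` for every operation, and each serializer module exposes a `storage_factory()` returning its concrete subclass. A minimal sketch with a hypothetical in-memory serializer (names other than `StorageBase`/`storage_factory` are invented for illustration):

```python
# Sketch of the serializer plugin contract: an abstract base plus a concrete
# module-level storage_factory() returning the implementation.
from typing import Any, Dict, List


class StorageBase:
    def __init__(self, api: Any):
        self.api = api

    def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
        raise NotImplementedError(
            "The implementation for the configured serializer is missing!"
        )


class MemorySerializer(StorageBase):
    """Hypothetical serializer that keeps collections in a plain dict."""

    _store: Dict[str, List[Dict[str, Any]]] = {"distro": [{"name": "f39"}]}

    def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
        return self._store.get(collection_type, [])


def storage_factory(api: Any) -> StorageBase:
    # Cobbler resolves the configured module and calls its storage_factory().
    return MemorySerializer(api)


serializer = storage_factory(api=None)
```

Callers only ever see the `StorageBase` interface, so swapping the file, SQLite, or MongoDB backends is a pure configuration change.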

# File: cobbler_cobbler/cobbler/modules/serializers/mongodb.py

"""
Cobbler's Mongo database based object serializer.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: James Cammarata <jimi@sngx.net>
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional
from cobbler.cexceptions import CX
from cobbler.modules.serializers import StorageBase
if TYPE_CHECKING:
from pymongo.database import Database
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM, Collection
try:
# pylint: disable-next=ungrouped-imports
from pymongo.errors import ConfigurationError, ConnectionFailure, OperationFailure
from pymongo.mongo_client import MongoClient
PYMONGO_LOADED = True
except ModuleNotFoundError:
# pylint: disable=invalid-name
ConfigurationError = None
ConnectionFailure = None
OperationFailure = None
# This is a constant! pyright just doesn't understand it.
PYMONGO_LOADED = False # type: ignore
def register() -> str:
"""
The mandatory Cobbler module registration hook.
"""
# FIXME: only run this if enabled.
if not PYMONGO_LOADED:
return ""
return "serializer"
def what() -> str:
"""
Module identification function
"""
return "serializer/mongodb"
class MongoDBSerializer(StorageBase):
"""
    MongoDB-based serializer that stores each Cobbler collection in its own MongoDB collection.
"""
def __init__(self, api: "CobblerAPI"):
super().__init__(api)
self.logger = logging.getLogger()
self.mongodb: Optional["MongoClient[Mapping[str, Any]]"] = None
self.mongodb_database: Optional["Database[Mapping[str, Any]]"] = None
self.database_name = "cobbler"
self.__connect()
def __connect(self) -> None:
"""
Reads the config file for mongodb and then connects to the mongodb.
"""
if ConnectionFailure is None or ConfigurationError is None:
raise ImportError("MongoDB is not correctly imported!")
host = self.api.settings().mongodb.get("host", "localhost")
port = self.api.settings().mongodb.get("port", 27017)
# TODO: Make database name configurable in settings
# TODO: Make authentication configurable
self.mongodb = MongoClient(host, port) # type: ignore
try:
# The ismaster command is cheap and doesn't require auth.
self.mongodb.admin.command("ping") # type: ignore
except ConnectionFailure as error:
raise CX("Unable to connect to Mongo database.") from error
except ConfigurationError as error:
raise CX(
"The configuration of the MongoDB connection isn't correct, please check the Cobbler settings."
) from error
if self.database_name not in self.mongodb.list_database_names(): # type: ignore
self.logger.info(
'Database with name "%s" was not found and will be created.',
self.database_name,
)
self.mongodb_database = self.mongodb["cobbler"] # type: ignore
def _rename_collection(self, old_collection: str, new_collection: str) -> None:
"""
Rename a collection in database.
:param old_collection: Previous collection name.
        :param new_collection: New collection name.
"""
if OperationFailure is None:
            raise ImportError("MongoDB is not correctly imported!")
if (
old_collection != "setting"
and old_collection in self.mongodb_database.list_collection_names() # type: ignore
):
try:
self.mongodb_database[old_collection].rename(new_collection) # type: ignore
except OperationFailure as error:
raise CX(
f'Cannot rename MongoDB collection from "{old_collection}" to "{new_collection}": {error}.'
) from error
def serialize_item(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
if self.mongodb_database is None:
raise ValueError("Database not available!")
mongodb_collection = self.mongodb_database[collection.collection_types()]
data = mongodb_collection.find_one({"name": item.name})
if data:
mongodb_collection.replace_one({"name": item.name}, item.serialize()) # type: ignore
else:
mongodb_collection.insert_one(item.serialize()) # type: ignore
def serialize_delete(self, collection: "Collection[ITEM]", item: "ITEM") -> None:
if self.mongodb_database is None:
raise ValueError("Database not available!")
mongodb_collection = self.mongodb_database[collection.collection_types()]
mongodb_collection.delete_one({"name": item.name}) # type: ignore
def serialize(self, collection: "Collection[ITEM]") -> None:
# TODO: error detection
ctype = collection.collection_types()
if ctype != "settings":
for item in collection:
self.serialize_item(collection, item)
def deserialize_raw(self, collection_type: str) -> List[Dict[str, Any]]:
if self.mongodb_database is None:
raise ValueError("Database not available!")
results = []
projection = None
collection = self.mongodb_database[collection_type]
lazy_start = self.api.settings().lazy_start
if lazy_start:
projection = ["name"]
# pymongo.cursor.Cursor
cursor = collection.find(projection=projection)
for result in cursor:
self._remove_id(result)
result["inmemory"] = not lazy_start # type: ignore
results.append(result) # type: ignore
return results # type: ignore
def deserialize(
self, collection: "Collection[ITEM]", topological: bool = True
) -> None:
self._rename_collection(
collection.collection_type(), collection.collection_types()
)
datastruct = self.deserialize_raw(collection.collection_types())
if topological and isinstance(datastruct, list): # type: ignore
datastruct.sort(key=lambda x: x.get("depth", 1)) # type: ignore
if isinstance(datastruct, dict):
# This is currently the corner case for the settings type.
collection.from_dict(datastruct) # type: ignore
elif isinstance(datastruct, list): # type: ignore
collection.from_list(datastruct) # type: ignore
def deserialize_item(self, collection_type: str, name: str) -> Dict[str, Any]:
"""
Get a collection item from database.
:param collection_type: The collection type to fetch.
:param name: collection Item name
:return: Dictionary of the collection item.
"""
if self.mongodb_database is None:
raise ValueError("Database not available!")
mongodb_collection = self.mongodb_database[collection_type]
result = mongodb_collection.find_one({"name": name})
if result is None:
raise CX(
f"Item {name} of collection {collection_type} was not found in MongoDB database {self.database_name}!"
)
self._remove_id(result)
result["inmemory"] = True # type: ignore
return result # type: ignore
@staticmethod
def _remove_id(_dict: Mapping[str, Any]):
if "_id" in _dict:
_dict.pop("_id") # type: ignore
def storage_factory(api: "CobblerAPI") -> MongoDBSerializer:
"""
    Create and return a MongoDBSerializer instance for the given API.
"""
return MongoDBSerializer(api)
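Independent of pymongo, the post-processing that `deserialize_raw()` applies to each result — dropping Mongo's internal `_id` field and tagging the row with an `inmemory` flag — can be shown on plain dicts (the `_id` value below is a made-up placeholder):

```python
# The same _id-stripping and inmemory-tagging the MongoDB serializer does,
# demonstrated on plain dicts instead of pymongo cursor results.
from typing import Any, Dict, List


def remove_id(_dict: Dict[str, Any]) -> None:
    if "_id" in _dict:
        _dict.pop("_id")


def postprocess(rows: List[Dict[str, Any]], lazy_start: bool) -> List[Dict[str, Any]]:
    results: List[Dict[str, Any]] = []
    for row in rows:
        remove_id(row)
        # With lazy_start only names are projected; items load on demand later.
        row["inmemory"] = not lazy_start
        results.append(row)
    return results


out = postprocess([{"_id": "652f", "name": "f39", "depth": 1}], lazy_start=False)
lazy = postprocess([{"_id": "652f", "name": "f40"}], lazy_start=True)
```

Stripping `_id` keeps the dicts compatible with `collection.from_list()`, which expects only Cobbler item fields.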

# File: cobbler_cobbler/cobbler/actions/hardlink.py

"""
Hard links Cobbler content together to save space.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: 2021 Enno Gotthold <egotthold@suse.de>
# SPDX-FileCopyrightText: Copyright SUSE LLC
import logging
import os
from typing import TYPE_CHECKING
from cobbler import utils
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class HardLinker:
"""
    Action that hard-links identical files in the distro and repo mirrors to save space.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor
:param api: The API to resolve information with.
"""
self.api = api
self.hardlink = ""
self.logger = logging.getLogger()
self.family = utils.get_family()
self.webdir = self.api.settings().webdir
# Getting the path to hardlink
for possible_location in ["/usr/bin/hardlink", "/usr/sbin/hardlink"]:
if os.path.exists(possible_location):
self.hardlink = possible_location
if not self.hardlink:
utils.die("please install 'hardlink' to use this feature")
def run(self) -> int:
"""
Simply hardlinks directories that are Cobbler managed.
"""
self.logger.info("now hardlinking to save space, this may take some time.")
        # Select the hardlink arguments according to the distribution family.
if self.family == "debian":
hardlink_args = ["-f", "-p", "-o", "-t", "-v"]
elif self.family == "suse":
hardlink_args = ["-f", "-v"]
else:
hardlink_args = ["-c", "-v"]
hardlink_cmd = (
[self.hardlink]
+ hardlink_args
+ [f"{self.webdir}/distro_mirror", f"{self.webdir}/repo_mirror"]
)
utils.subprocess_call(hardlink_cmd.copy(), shell=False)
hardlink_cmd = [
self.hardlink,
"-c",
"-v",
f"{self.webdir}/distro_mirror",
f"{self.webdir}/repo_mirror",
]
return utils.subprocess_call(hardlink_cmd.copy(), shell=False)
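What the `hardlink` sweep achieves can be shown with `os.link()`: once two paths are hard links, they share one inode and the bytes are stored only once (POSIX filesystems; the paths below are throwaway temp files, not real mirror content):

```python
# What hardlink(1) does for duplicate mirror files, shown with os.link():
# the duplicate becomes a second name for the same inode.
import os
import tempfile

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "vmlinuz")
duplicate = os.path.join(workdir, "vmlinuz-copy")
with open(original, "wb") as fd:
    fd.write(b"\x00" * 4096)

os.link(original, duplicate)  # hard link instead of a second 4 KiB copy
same_inode = os.stat(original).st_ino == os.stat(duplicate).st_ino
```

This is why the action only covers `distro_mirror` and `repo_mirror`: those trees routinely contain byte-identical packages across repos, where deduplication pays off.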

# File: cobbler_cobbler/cobbler/actions/reposync.py

"""
Builds out and synchronizes yum repo mirrors.
Initial support for rsync, perhaps reposync coming later.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2007, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import logging
import os
import os.path
import pipes
import shutil
import stat
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple
from cobbler import utils
from cobbler.cexceptions import CX
from cobbler.enums import MirrorType, RepoArchs, RepoBreeds
from cobbler.utils import filesystem_helpers, os_release
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.repo import Repo
try:
import librepo # type: ignore
HAS_LIBREPO = True
except ModuleNotFoundError:
# This is a constant since it is only defined once.
HAS_LIBREPO = False # type: ignore
def repo_walker(
top: str, func: Callable[[Any, str, List[str]], None], arg: Any
) -> None:
"""
Directory tree walk with callback function.
For each directory in the directory tree rooted at top (including top itself, but excluding '.' and '..'), call
func(arg, dirname, fnames). dirname is the name of the directory, and fnames a list of the names of the files and
subdirectories in dirname (excluding '.' and '..'). func may modify the fnames list in-place (e.g. via del or slice
assignment), and walk will only recurse into the subdirectories whose names remain in fnames; this can be used to
implement a filter, or to impose a specific order of visiting. No semantics are defined for, or required of, arg,
beyond that arg is always passed to func. It can be used, e.g., to pass a filename pattern, or a mutable object
designed to accumulate statistics. Passing None for arg is common.
:param top: The directory that should be taken as root. The root dir will also be included in the processing.
:param func: The function that should be executed.
:param arg: The arguments for that function.
"""
try:
names = os.listdir(top)
        # The order of the returned names is not guaranteed and seems to depend on the filesystem, thus sort the list.
names.sort()
except os.error:
return
func(arg, top, names)
for name in names:
path_name = os.path.join(top, name)
try:
file_stats = os.lstat(path_name)
except os.error:
continue
if stat.S_ISDIR(file_stats.st_mode):
repo_walker(path_name, func, arg)
class RepoSync:
"""
Handles conversion of internal state to the tftpboot tree layout.
"""
# ==================================================================================
def __init__(self, api: "CobblerAPI", tries: int = 1, nofail: bool = False) -> None:
"""
Constructor
:param api: The object which holds all information in Cobbler.
:param tries: The number of tries before the operation fails.
:param nofail: This sets the strictness of the reposync result handling.
"""
self.verbose = True
self.api = api
self.settings = api.settings()
self.repos = api.repos()
self.rflags = self.settings.reposync_flags.split()
self.tries = tries
self.nofail = nofail
self.logger = logging.getLogger()
self.logger.info("hello, reposync")
# ===================================================================
def run(self, name: Optional[str] = None, verbose: bool = True) -> None:
"""
Syncs the current repo configuration file with the filesystem.
:param name: The name of the repository to synchronize.
:param verbose: If the action should be logged verbose or not.
"""
self.logger.info("run, reposync, run!")
self.verbose = verbose
report_failure = False
for repo in self.repos:
if name is not None and repo.name != name:
# Invoked to sync only a specific repo, this is not the one
continue
if name is None and not repo.keep_updated:
# Invoked to run against all repos, but this one is off
self.logger.info("%s is set to not be updated", repo.name)
continue
repo_mirror = os.path.join(self.settings.webdir, "repo_mirror")
repo_path = os.path.join(repo_mirror, repo.name)
if not os.path.isdir(repo_path) and not repo.mirror.lower().startswith(
"rhn://"
):
os.makedirs(repo_path)
# Set the environment keys specified for this repo and save the old one if they modify an existing variable.
env = repo.environment
old_env: Dict[str, Any] = {}
for k in list(env.keys()):
self.logger.debug("setting repo environment: %s=%s", k, env[k])
if env[k] is not None:
if os.getenv(k):
old_env[k] = os.getenv(k)
else:
os.environ[k] = env[k]
# Which may actually NOT reposync if the repo is set to not mirror locally but that's a technicality.
success = False
for reposync_try in range(self.tries + 1, 1, -1):
try:
self.sync(repo)
success = True
break
except Exception:
success = False
utils.log_exc()
self.logger.warning(
"reposync failed, tries left: %s", (reposync_try - 2)
)
# Cleanup/restore any environment variables that were added or changed above.
for k in list(env.keys()):
if env[k] is not None:
if k in old_env:
self.logger.debug(
"resetting repo environment: %s=%s", k, old_env[k]
)
os.environ[k] = old_env[k]
else:
self.logger.debug("removing repo environment: %s=%s", k, env[k])
del os.environ[k]
if not success:
report_failure = True
if not self.nofail:
raise CX("reposync failed, retry limit reached, aborting")
self.logger.error("reposync failed, retry limit reached, skipping")
self.update_permissions(repo_path)
if report_failure:
raise CX("overall reposync failed, at least one repo failed to synchronize")
# ==================================================================================
def sync(self, repo: "Repo") -> None:
"""
Conditionally sync a repo, based on type.
:param repo: The repo to sync.
"""
if repo.breed == RepoBreeds.RHN:
self.rhn_sync(repo)
elif repo.breed == RepoBreeds.YUM:
self.yum_sync(repo)
elif repo.breed == RepoBreeds.APT:
self.apt_sync(repo)
elif repo.breed == RepoBreeds.RSYNC:
self.rsync_sync(repo)
elif repo.breed == RepoBreeds.WGET:
self.wget_sync(repo)
else:
raise CX(
f"unable to sync repo ({repo.name}), unknown or unsupported repo type ({repo.breed.value})"
)
# ====================================================================================
def librepo_getinfo(self, dirname: str) -> Dict[Any, Any]:
"""
Used to get records from a repomd.xml file of downloaded rpmmd repository.
:param dirname: The local path of rpmmd repository.
:return: The dict representing records from a repomd.xml file of rpmmd repository.
"""
# FIXME: Librepo has no typing stubs.
librepo_handle = librepo.Handle() # type: ignore
librepo_result = librepo.Result() # type: ignore
librepo_handle.setopt(librepo.LRO_REPOTYPE, librepo.LR_YUMREPO) # type: ignore
librepo_handle.setopt(librepo.LRO_URLS, [dirname]) # type: ignore
librepo_handle.setopt(librepo.LRO_LOCAL, True) # type: ignore
librepo_handle.setopt(librepo.LRO_CHECKSUM, True) # type: ignore
librepo_handle.setopt(librepo.LRO_IGNOREMISSING, True) # type: ignore
try:
librepo_handle.perform(librepo_result) # type: ignore
except librepo.LibrepoException as error: # type: ignore
raise CX("librepo error: " + dirname + " - " + error.args[1]) from error # type: ignore
return librepo_result.getinfo(librepo.LRR_RPMMD_REPOMD).get("records", {}) # type: ignore
# ====================================================================================
def createrepo_walker(self, repo: "Repo", dirname: str, fnames: Any) -> None:
"""
Used to run createrepo on a copied Yum mirror.
:param repo: The repository object to run for.
:param dirname: The directory to run in.
:param fnames: Not known what this is for.
"""
if os.path.exists(dirname) or repo.breed == RepoBreeds.RSYNC:
utils.remove_yum_olddata(dirname)
# add any repo metadata we can use
mdoptions: List[str] = []
origin_path = os.path.join(dirname, ".origin")
repodata_path = os.path.join(origin_path, "repodata")
if os.path.isfile(os.path.join(repodata_path, "repomd.xml")):
repo_data = self.librepo_getinfo(origin_path)
if "group" in repo_data:
groupmdfile = repo_data["group"]["location_href"]
mdoptions += ["-g", os.path.join(origin_path, groupmdfile)]
if "prestodelta" in repo_data:
# need createrepo >= 0.9.7 to add deltas
if utils.get_family() in ("redhat", "suse"):
cmd = [
"/usr/bin/rpmquery",
"--queryformat=%{VERSION}",
"createrepo",
]
createrepo_ver = utils.subprocess_get(cmd, shell=False)
if not createrepo_ver[0:1].isdigit():
cmd = [
"/usr/bin/rpmquery",
"--queryformat=%{VERSION}",
"createrepo_c",
]
createrepo_ver = utils.subprocess_get(cmd, shell=False)
if utils.compare_versions_gt(createrepo_ver, "0.9.7"):
mdoptions.append("--deltas")
else:
self.logger.error(
"this repo has presto metadata; you must upgrade createrepo to >= 0.9.7 "
"first and then need to resync the repo through Cobbler."
)
blended = utils.blender(self.api, False, repo)
flags = blended.get("createrepo_flags", "(ERROR: FLAGS)").split()
try:
cmd = ["createrepo"] + mdoptions + flags + [pipes.quote(dirname)]
utils.subprocess_call(cmd, shell=False)
except Exception:
utils.log_exc()
self.logger.error("createrepo failed.")
del fnames[:] # we're in the right place
# ====================================================================================
def wget_sync(self, repo: "Repo") -> None:
"""
Handle mirroring of directories using wget
:param repo: The repo object to sync via wget.
"""
mirror_program = "/usr/bin/wget"
if not os.path.exists(mirror_program):
raise CX(f"no {mirror_program} found, please install it")
if repo.mirror_type != MirrorType.BASEURL:
raise CX(
                "mirrorlist and metalink mirror types are not supported for wget'd repositories"
)
if repo.rpm_list not in ("", []):
self.logger.warning("--rpm-list is not supported for wget'd repositories")
dest_path = os.path.join(self.settings.webdir, "repo_mirror", repo.name)
# FIXME: wrapper for subprocess that logs to logger
cmd = [
"wget",
"-N",
"-np",
"-r",
"-l",
"inf",
"-nd",
"-P",
pipes.quote(dest_path),
pipes.quote(repo.mirror),
]
return_value = utils.subprocess_call(cmd, shell=False)
if return_value != 0:
raise CX("cobbler reposync failed")
repo_walker(dest_path, self.createrepo_walker, repo)
self.create_local_file(dest_path, repo)
# ====================================================================================
def rsync_sync(self, repo: "Repo") -> None:
"""
Handle copying of rsync:// and rsync-over-ssh repos.
:param repo: The repo to sync via rsync.
"""
if not repo.mirror_locally:
raise CX(
"rsync:// urls must be mirrored locally, yum cannot access them directly"
)
if repo.mirror_type != MirrorType.BASEURL:
raise CX(
                "mirrorlist and metalink mirror types are not supported for rsync'd repositories"
)
if repo.rpm_list not in ("", []):
self.logger.warning("--rpm-list is not supported for rsync'd repositories")
dest_path = os.path.join(self.settings.webdir, "repo_mirror", repo.name)
spacer: List[str] = []
if not repo.mirror.startswith("rsync://") and not repo.mirror.startswith("/"):
spacer = ["-e ssh"]
if not repo.mirror.endswith("/"):
repo.mirror = f"{repo.mirror}/"
flags: List[str] = []
for repo_option in repo.rsyncopts:
if repo.rsyncopts[repo_option]:
flags.append(f"{repo_option} {repo.rsyncopts[repo_option]}")
else:
flags.append(f"{repo_option}")
if not flags:
flags = self.settings.reposync_rsync_flags.split()
cmd = ["rsync"] + flags + ["--delete-after"]
cmd += spacer + [
"--delete",
"--exclude-from=/etc/cobbler/rsync.exclude",
pipes.quote(repo.mirror),
pipes.quote(dest_path),
]
return_code = utils.subprocess_call(cmd, shell=False)
if return_code != 0:
raise CX("cobbler reposync failed")
# If ran in archive mode then repo should already contain all repodata and does not need createrepo run
archive = False
if "--archive" in flags:
archive = True
else:
# skip all flags --{options} as we need to look for combined flags like -vaH
for option in flags:
if option.startswith("--"):
pass
else:
if "a" in option:
archive = True
break
if not archive:
repo_walker(dest_path, self.createrepo_walker, repo)
self.create_local_file(dest_path, repo)
# ====================================================================================
@staticmethod
def reposync_cmd() -> List[str]:
"""
Determine reposync command
:return: The path to the reposync command. If dnf exists it is used instead of reposync.
"""
if not HAS_LIBREPO:
raise CX("no librepo found, please install python3-librepo")
if os.path.exists("/usr/bin/dnf"):
cmd = ["/usr/bin/dnf", "reposync"]
elif os.path.exists("/usr/bin/reposync"):
cmd = ["/usr/bin/reposync"]
else:
# Warn about not having yum-utils. We don't want to require it in the package because Fedora 22+ has moved
# to dnf.
raise CX("no /usr/bin/reposync found, please install yum-utils")
return cmd
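The probe order in ``reposync_cmd`` (dnf's built-in reposync first, then the classic yum-utils binary) can be expressed as a small, testable helper. ``pick_reposync_cmd`` and the injectable ``path_exists`` parameter are illustrative assumptions, not Cobbler API:

```python
import os
from typing import Callable, List


def pick_reposync_cmd(
    path_exists: Callable[[str], bool] = os.path.exists
) -> List[str]:
    """Prefer dnf's built-in reposync subcommand; fall back to the classic
    yum-utils binary; fail loudly when neither is installed."""
    if path_exists("/usr/bin/dnf"):
        return ["/usr/bin/dnf", "reposync"]
    if path_exists("/usr/bin/reposync"):
        return ["/usr/bin/reposync"]
    raise RuntimeError("no reposync found, please install dnf or yum-utils")
```

Injecting the existence check keeps the selection logic testable without touching the filesystem.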
# ====================================================================================
def rhn_sync(self, repo: "Repo") -> None:
"""
Handle mirroring of RHN repos.
:param repo: The repo object to synchronize.
"""
# flag indicating not to pull the whole repo
has_rpm_list = False
# detect cases that require special handling
if repo.rpm_list not in ("", []):
has_rpm_list = True
# Create yum config file for use by reposync
# FIXME: don't hardcode
repos_path = os.path.join(self.settings.webdir, "repo_mirror")
dest_path = os.path.join(repos_path, repo.name)
temp_path = os.path.join(dest_path, ".origin")
if not os.path.isdir(temp_path):
# FIXME: there's a chance this might break the RHN D/L case
os.makedirs(temp_path)
# how we invoke reposync depends on whether this is RHN content or not.
# This is the somewhat more-complex RHN case.
# NOTE: this requires that you have entitlements for the server and you give the mirror as rhn://$channelname
if not repo.mirror_locally:
raise CX("rhn:// repos do not work with --mirror-locally=False")
if has_rpm_list:
self.logger.warning("warning: --rpm-list is not supported for RHN content")
rest = repo.mirror[6:] # everything after rhn://
if repo.name != rest:
args = {"name": repo.name, "rest": rest}
raise CX(
"ERROR: repository %(name)s needs to be renamed %(rest)s as the name of the cobbler repository "
"must match the name of the RHN channel" % args
)
arch = repo.arch.value
if arch == "i386":
# Counter-intuitive, but we want the newish kernels too
arch = "i686"
cmd = self.reposync_cmd()
cmd += self.rflags + [
f"--repo={pipes.quote(rest)}",
f"--download-path={pipes.quote(repos_path)}",
]
if arch != "none":
cmd.append(f'--arch="{arch}"')
# Now regardless of whether we're doing yumdownloader or reposync or whether the repo was http://, ftp://, or
# rhn://, execute all queued commands here. Any failure at any point stops the operation.
if repo.mirror_locally:
utils.subprocess_call(cmd, shell=False)
# Some more special case handling for RHN. Create the config file now, because the directory didn't exist
# earlier.
self.create_local_file(temp_path, repo, output=False)
# Now run createrepo to rebuild the index
if repo.mirror_locally:
repo_walker(dest_path, self.createrepo_walker, repo)
# Create the config file the hosts will use to access the repository.
self.create_local_file(dest_path, repo)
# ====================================================================================
def gen_urlgrab_ssl_opts(
self, yumopts: Dict[str, Any]
) -> Tuple[Optional[Tuple[Any, ...]], bool]:
"""
This function translates yum repository options into the appropriate options for python-requests
:param yumopts: The options to convert.
:return: A tuple with the cert and a boolean if it should be verified or not.
"""
# use SSL options if specified in yum opts
cert = None
sslcacert = None
verify = False
if "sslcacert" in yumopts:
sslcacert = yumopts["sslcacert"]
        if "sslclientkey" in yumopts and "sslclientcert" in yumopts:
cert = (sslcacert, yumopts["sslclientcert"], yumopts["sslclientkey"])
# Note that the default of requests is to verify the peer and host but the default here is NOT to verify them
# unless sslverify is explicitly set to 1 in yumopts.
if "sslverify" in yumopts:
verify = yumopts["sslverify"] == 1
return cert, verify
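Note that both ``sslclientkey`` and ``sslclientcert`` must be present before a client-certificate tuple is built, and verification stays off unless ``sslverify`` is explicitly ``1``. The same translation as a standalone sketch (``translate_ssl_opts`` is a hypothetical name):

```python
from typing import Any, Dict, Optional, Tuple


def translate_ssl_opts(
    yumopts: Dict[str, Any]
) -> Tuple[Optional[Tuple[Any, ...]], bool]:
    """Translate yum SSL repo options into the (cert, verify) pair consumed
    by python-requests style clients."""
    cert = None
    sslcacert = yumopts.get("sslcacert")
    # Both the client key AND the client cert must be configured.
    if "sslclientkey" in yumopts and "sslclientcert" in yumopts:
        cert = (sslcacert, yumopts["sslclientcert"], yumopts["sslclientkey"])
    # Default is NOT to verify unless sslverify is explicitly set to 1.
    verify = yumopts.get("sslverify") == 1
    return cert, verify
```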
# ====================================================================================
def yum_sync(self, repo: "Repo") -> None:
"""
Handle copying of http:// and ftp:// yum repos.
        :param repo: The yum repository to sync.
"""
# create the config file the hosts will use to access the repository.
repos_path = os.path.join(self.settings.webdir, "repo_mirror")
dest_path = os.path.join(repos_path, repo.name)
self.create_local_file(dest_path, repo)
if not repo.mirror_locally:
return
# command to run
cmd = self.reposync_cmd()
# flag indicating not to pull the whole repo
has_rpm_list = False
# detect cases that require special handling
if repo.rpm_list not in ("", []):
has_rpm_list = True
# create yum config file for use by reposync
temp_path = os.path.join(dest_path, ".origin")
if not os.path.isdir(temp_path):
# FIXME: there's a chance this might break the RHN D/L case
os.makedirs(temp_path)
temp_file = self.create_local_file(temp_path, repo, output=False)
arch = repo.arch.value
if arch == "i386":
# Counter-intuitive, but we want the newish kernels too
arch = "i686"
if not has_rpm_list:
# If we have not requested only certain RPMs, use reposync
cmd += self.rflags + [
f"--config={temp_file}",
f"--repoid={pipes.quote(repo.name)}",
f"--download-path={pipes.quote(repos_path)}",
]
if arch != "none":
cmd.append(f"--arch={arch}")
else:
# Create the output directory if it doesn't exist
if not os.path.exists(dest_path):
os.makedirs(dest_path)
# Older yumdownloader sometimes explodes on --resolvedeps if this happens to you, upgrade yum & yum-utils
extra_flags = self.settings.yumdownloader_flags.split()
cmd = [
"/usr/bin/dnf",
"download",
] + extra_flags
if arch == "src":
cmd.append("--source")
cmd += [
"--disablerepo=*",
f"--enablerepo={pipes.quote(repo.name)}",
f"-c={temp_file}",
f"--destdir={pipes.quote(dest_path)}",
]
cmd += repo.rpm_list
# Now regardless of whether we're doing yumdownloader or reposync or whether the repo was http://, ftp://, or
# rhn://, execute all queued commands here. Any failure at any point stops the operation.
return_code = utils.subprocess_call(cmd, shell=False)
if return_code != 0:
raise CX("cobbler reposync failed")
# download any metadata we can use
proxy = None
if repo.proxy not in ("<<None>>", ""):
proxy = repo.proxy
(cert, verify) = self.gen_urlgrab_ssl_opts(repo.yumopts)
repodata_path = os.path.join(dest_path, "repodata")
repomd_path = os.path.join(repodata_path, "repomd.xml")
if os.path.exists(repodata_path) and not os.path.isfile(repomd_path):
shutil.rmtree(repodata_path, ignore_errors=False, onerror=None)
repodata_path = os.path.join(temp_path, "repodata")
if os.path.exists(repodata_path):
self.logger.info("Deleted old repo metadata for %s", repodata_path)
shutil.rmtree(repodata_path, ignore_errors=False, onerror=None)
librepo_handle = librepo.Handle() # type: ignore
librepo_result = librepo.Result() # type: ignore
librepo_handle.setopt(librepo.LRO_REPOTYPE, librepo.LR_YUMREPO) # type: ignore
librepo_handle.setopt(librepo.LRO_CHECKSUM, True) # type: ignore
librepo_handle.setopt(librepo.LRO_DESTDIR, temp_path) # type: ignore
if repo.mirror_type == MirrorType.METALINK:
librepo_handle.setopt(librepo.LRO_METALINKURL, repo.mirror) # type: ignore
elif repo.mirror_type == MirrorType.MIRRORLIST:
librepo_handle.setopt(librepo.LRO_MIRRORLISTURL, repo.mirror) # type: ignore
elif repo.mirror_type == MirrorType.BASEURL:
librepo_handle.setopt(librepo.LRO_URLS, [repo.mirror]) # type: ignore
if verify:
librepo_handle.setopt(librepo.LRO_SSLVERIFYPEER, True) # type: ignore
librepo_handle.setopt(librepo.LRO_SSLVERIFYHOST, True) # type: ignore
if cert:
sslcacert, sslclientcert, sslclientkey = cert
librepo_handle.setopt(librepo.LRO_SSLCACERT, sslcacert) # type: ignore
librepo_handle.setopt(librepo.LRO_SSLCLIENTCERT, sslclientcert) # type: ignore
librepo_handle.setopt(librepo.LRO_SSLCLIENTKEY, sslclientkey) # type: ignore
if proxy:
librepo_handle.setopt(librepo.LRO_PROXY, proxy) # type: ignore
librepo_handle.setopt(librepo.LRO_PROXYTYPE, librepo.PROXY_HTTP) # type: ignore
try:
librepo_handle.perform(librepo_result) # type: ignore
except librepo.LibrepoException as exception: # type: ignore
raise CX(
"librepo error: " + temp_path + " - " + exception.args[1] # type: ignore
) from exception
# now run createrepo to rebuild the index
if repo.mirror_locally:
repo_walker(dest_path, self.createrepo_walker, repo)
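The three mirror types dispatch onto distinct librepo handle options in the code above. A sketch of that dispatch table (the ``LRO_*`` names are taken from the calls above; the helper itself is hypothetical):

```python
def librepo_mirror_option(mirror_type: str) -> str:
    """Return the librepo LRO_* option name used for a given mirror type,
    mirroring the if/elif chain in yum_sync."""
    mapping = {
        "metalink": "LRO_METALINKURL",
        "mirrorlist": "LRO_MIRRORLISTURL",
        "baseurl": "LRO_URLS",  # note: LRO_URLS takes a list of URLs
    }
    return mapping[mirror_type]
```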
# ====================================================================================
def apt_sync(self, repo: "Repo") -> None:
"""
Handle copying of http:// and ftp:// debian repos.
:param repo: The apt repository to sync.
"""
# Warn about not having mirror program.
mirror_program = "/usr/bin/debmirror"
if not os.path.exists(mirror_program):
raise CX(f"no {mirror_program} found, please install it")
# detect cases that require special handling
if repo.rpm_list not in ("", []):
raise CX("has_rpm_list not yet supported on apt repos")
if repo.arch == RepoArchs.NONE:
raise CX("Architecture is required for apt repositories")
if repo.mirror_type != MirrorType.BASEURL:
raise CX(
                "mirrorlist and metalink mirror types are not supported for apt repositories"
)
# built destination path for the repo
dest_path = os.path.join(self.settings.webdir, "repo_mirror", repo.name)
if repo.mirror_locally:
            # NOTE: Dropping the @@suite@@ replacement as it was also dropped from manage_import_debian_ubuntu.py
            # because repo has no os_version attribute. If it is added again it will break the Web UI!
# mirror = repo.mirror.replace("@@suite@@",repo.os_version)
mirror = repo.mirror
idx = mirror.find("://")
method = mirror[:idx]
mirror = mirror[idx + 3 :]
idx = mirror.find("/")
host = mirror[:idx]
mirror = mirror[idx:]
dists = ",".join(repo.apt_dists)
components = ",".join(repo.apt_components)
mirror_data = [
f"--method={pipes.quote(method)}",
f"--host={pipes.quote(host)}",
f"--root={pipes.quote(mirror)}",
f"--dist={pipes.quote(dists)}",
f"--section={pipes.quote(components)}",
]
rflags = ["--nocleanup"]
for repo_yumoption in repo.yumopts:
if repo.yumopts[repo_yumoption]:
rflags.append(f"{repo_yumoption}={repo.yumopts[repo_yumoption]}")
else:
rflags.append(repo_yumoption)
cmd = [mirror_program] + rflags + mirror_data + [pipes.quote(dest_path)]
if repo.arch == RepoArchs.SRC:
cmd.append("--source")
else:
arch = repo.arch.value
if arch == "x86_64":
arch = "amd64" # FIX potential arch errors
cmd.append("--nosource")
cmd.append(f"-a={arch}")
        # Sets an environment variable for the subprocess; otherwise debmirror will fail as it needs this variable
        # to exist.
# FIXME: might this break anything? So far it doesn't
os.putenv("HOME", "/var/lib/cobbler")
return_code = utils.subprocess_call(cmd, shell=False)
if return_code != 0:
raise CX("cobbler reposync failed")
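``apt_sync`` splits the mirror URL by hand into the method/host/root triple that debmirror expects. The same parsing as a compact helper (the function name is assumed for illustration):

```python
from typing import Tuple


def split_debmirror_url(mirror: str) -> Tuple[str, str, str]:
    """Split 'method://host/path' into the (method, host, root) triple
    passed to debmirror as --method/--host/--root."""
    method, _, rest = mirror.partition("://")
    host, _, path = rest.partition("/")
    return method, host, "/" + path
```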
def create_local_file(
self, dest_path: str, repo: "Repo", output: bool = True
) -> str:
"""
Creates Yum config files for use by reposync
Two uses:
(A) output=True, Create local files that can be used with yum on provisioned clients to make use of this mirror.
        (B) output=False, Create a temporary config file to feed into reposync for mirroring
:param dest_path: The destination path to create the file at.
:param repo: The repository object to create a file for.
:param output: See described above.
:return: The name of the file which was written.
"""
# The output case will generate repo configuration files which are usable for the installed systems. They need
# to be made compatible with --server-override which means they are actually templates, which need to be
# rendered by a Cobbler-sync on per profile/system basis.
if output:
fname = os.path.join(dest_path, "config.repo")
else:
fname = os.path.join(dest_path, f"{repo.name}.repo")
self.logger.debug("creating: %s", fname)
if not os.path.exists(dest_path):
filesystem_helpers.mkdir(dest_path)
with open(fname, "w", encoding="UTF-8") as config_file:
if not output:
config_file.write("[main]\nreposdir=/dev/null\n")
config_file.write(f"[{repo.name}]\n")
config_file.write(f"name={repo.name}\n")
optenabled = False
optgpgcheck = False
if output:
if repo.mirror_locally:
protocol = self.api.settings().autoinstall_scheme
line = f"baseurl={protocol}://${{http_server}}/cobbler/repo_mirror/{repo.name}\n"
else:
mstr = repo.mirror
if mstr.startswith("/"):
mstr = f"file://{mstr}"
line = f"{repo.mirror_type.value}={mstr}\n"
config_file.write(line)
# User may have options specific to certain yum plugins add them to the file
for repo_yumoption in repo.yumopts:
if repo_yumoption == "enabled":
optenabled = True
if repo_yumoption == "gpgcheck":
optgpgcheck = True
else:
mstr = repo.mirror
if mstr.startswith("/"):
mstr = f"file://{mstr}"
line = repo.mirror_type.value + f"={mstr}\n"
if self.settings.http_port not in (80, "80"):
http_server = f"{self.settings.server}:{self.settings.http_port}"
else:
http_server = self.settings.server
line = line.replace("@@server@@", http_server)
config_file.write(line)
config_proxy = None
if repo.proxy == "<<inherit>>":
config_proxy = self.settings.proxy_url_ext
elif repo.proxy not in ("", "<<None>>"):
config_proxy = repo.proxy
if config_proxy is not None:
config_file.write(f"proxy={config_proxy}\n")
if "exclude" in list(repo.yumopts.keys()):
self.logger.debug("excluding: %s", repo.yumopts["exclude"])
config_file.write(f"exclude={repo.yumopts['exclude']}\n")
if not optenabled:
config_file.write("enabled=1\n")
config_file.write(f"priority={repo.priority}\n")
# FIXME: potentially might want a way to turn this on/off on a per-repo basis
if not optgpgcheck:
config_file.write("gpgcheck=0\n")
# user may have options specific to certain yum plugins
# add them to the file
for repo_yumoption in repo.yumopts:
if not (
output and repo.mirror_locally and repo_yumoption.startswith("ssl")
):
config_file.write(
f"{repo_yumoption}={repo.yumopts[repo_yumoption]}\n"
)
return fname
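For case (A) the method emits a client-facing stanza built around a templated ``baseurl``. A minimal sketch of what such a rendered stanza looks like (the helper and its defaults are illustrative, not the real template):

```python
def render_repo_stanza(name: str, baseurl: str, priority: int = 99,
                       enabled: bool = True, gpgcheck: bool = False) -> str:
    """Build a minimal yum repo stanza resembling what create_local_file
    writes into config.repo for provisioned clients."""
    return (
        f"[{name}]\n"
        f"name={name}\n"
        f"baseurl={baseurl}\n"
        f"enabled={int(enabled)}\n"
        f"priority={priority}\n"
        f"gpgcheck={int(gpgcheck)}\n"
    )
```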
# ==================================================================================
def update_permissions(self, repo_path: str) -> None:
"""
Verifies that permissions and contexts after an rsync are as expected.
Sending proper rsync flags should prevent the need for this, though this is largely a safeguard.
:param repo_path: The path to update the permissions of.
"""
# all_path = os.path.join(repo_path, "*")
owner = "root:apache"
(dist, _) = os_release()
if dist == "suse":
owner = "root:www"
elif dist in ("debian", "ubuntu"):
owner = "root:www-data"
cmd1 = ["chown", "-R", owner, repo_path]
utils.subprocess_call(cmd1, shell=False)
cmd2 = ["chmod", "-R", "755", repo_path]
utils.subprocess_call(cmd2, shell=False)
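The group-ownership choice in ``update_permissions`` depends only on the distro family reported by ``os_release()``. Isolated as a pure function for clarity (``repo_owner_for`` is a hypothetical name):

```python
def repo_owner_for(dist: str) -> str:
    """Web-server group ownership for mirrored repos, keyed on the distro
    family, matching the branches in update_permissions."""
    if dist == "suse":
        return "root:www"
    if dist in ("debian", "ubuntu"):
        return "root:www-data"
    return "root:apache"  # Red Hat family default
```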
| 34,149 | Python | .py | 692 | 37.101156 | 120 | 0.556129 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,183 | sync.py | cobbler_cobbler/cobbler/actions/sync.py |
"""
Builds out filesystem trees/data based on the object tree.
This is the code behind 'cobbler sync'.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import glob
import logging
import os
import time
from typing import TYPE_CHECKING, Dict, List, Optional
from cobbler import templar, tftpgen, utils
from cobbler.cexceptions import CX
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.distro import Distro
from cobbler.items.image import Image
from cobbler.items.profile import Profile
from cobbler.items.system import System
from cobbler.modules.managers import (
DhcpManagerModule,
DnsManagerModule,
TftpManagerModule,
)
class CobblerSync:
"""
Handles conversion of internal state to the tftpboot tree layout
"""
def __init__(
self,
api: "CobblerAPI",
verbose: bool = True,
dhcp: Optional["DhcpManagerModule"] = None,
dns: Optional["DnsManagerModule"] = None,
tftpd: Optional["TftpManagerModule"] = None,
) -> None:
"""
Constructor
:param api: The API instance which holds all information about cobbler.
:param verbose: Whether to log the actions performed in this module verbose or not.
:param dhcp: The DHCP manager which can update the DHCP config.
:param dns: The DNS manager which can update the DNS config.
:param tftpd: The TFTP manager which can update the TFTP config.
"""
self.logger = logging.getLogger()
self.verbose = verbose
self.api = api
self.distros = api.distros()
self.profiles = api.profiles()
self.systems = api.systems()
self.images = api.images()
self.settings = api.settings()
self.repos = api.repos()
self.templar = templar.Templar(self.api)
self.tftpgen = tftpgen.TFTPGen(api)
if dns is None:
raise ValueError("dns not optional")
self.dns = dns
if dhcp is None:
            raise ValueError("dhcp not optional")
self.dhcp = dhcp
if tftpd is None:
            raise ValueError("tftpd not optional")
self.tftpd = tftpd
self.bootloc = self.settings.tftpboot_location
self.pxelinux_dir = os.path.join(self.bootloc, "pxelinux.cfg")
self.grub_dir = os.path.join(self.bootloc, "grub")
self.images_dir = os.path.join(self.bootloc, "images")
self.ipxe_dir = os.path.join(self.bootloc, "ipxe")
self.esxi_dir = os.path.join(self.bootloc, "esxi")
self.rendered_dir = os.path.join(self.settings.webdir, "rendered")
self.links = os.path.join(self.settings.webdir, "links")
self.distromirror_config = os.path.join(
self.settings.webdir, "distro_mirror/config"
)
filesystem_helpers.create_tftpboot_dirs(self.api)
filesystem_helpers.create_web_dirs(self.api)
def __common_run(self):
"""
Common startup code for the different sync algorithms
"""
if not os.path.exists(self.bootloc):
utils.die(f"cannot find directory: {self.bootloc}")
self.logger.info("running pre-sync triggers")
# run pre-triggers...
utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/pre/*")
self.distros = self.api.distros()
self.profiles = self.api.profiles()
self.systems = self.api.systems()
self.settings = self.api.settings()
self.repos = self.api.repos()
def run_sync_systems(self, systems: List[str]):
"""
Syncs the specific systems with the config tree.
"""
self.__common_run()
# Have the tftpd module handle copying bootloaders, distros, images, and all_system_files
self.tftpd.sync_systems(systems)
if self.settings.manage_dhcp:
self.write_dhcp()
if self.settings.manage_dns:
self.logger.info("rendering DNS files")
self.dns.regen_hosts()
self.dns.write_configs()
self.logger.info("cleaning link caches")
self.clean_link_cache()
if self.settings.manage_rsync:
self.logger.info("rendering rsync files")
self.rsync_gen()
# run post-triggers
self.logger.info("running post-sync triggers")
utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/post/*")
utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/change/*")
def run(self) -> None:
"""
Syncs the current configuration file with the config tree.
        Running the ``Check().run_`` functions beforehand is recommended.
"""
self.__common_run()
# execute the core of the sync operation
self.logger.info("cleaning trees")
self.clean_trees()
# Have the tftpd module handle copying bootloaders, distros, images, and all_system_files
self.tftpd.sync()
# Copy distros to the webdir
# Adding in the exception handling to not blow up if files have been moved (or the path references an NFS
# directory that's no longer mounted)
for distro in self.distros:
try:
self.logger.info("copying files for distro: %s", distro.name)
self.tftpgen.copy_single_distro_files(
distro, self.settings.webdir, True
)
self.tftpgen.write_templates(distro, write_file=True)
except CX as cobbler_exception:
self.logger.error(cobbler_exception.value)
if self.settings.manage_dhcp:
self.write_dhcp()
if self.settings.manage_dns:
self.logger.info("rendering DNS files")
self.dns.regen_hosts()
self.dns.write_configs()
if self.settings.manage_tftpd:
# copy in boot_files
self.tftpd.write_boot_files()
self.logger.info("cleaning link caches")
self.clean_link_cache()
if self.settings.manage_rsync:
self.logger.info("rendering Rsync files")
self.rsync_gen()
# run post-triggers
self.logger.info("running post-sync triggers")
utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/post/*")
utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/change/*")
def clean_trees(self):
"""
Delete any previously built pxelinux.cfg tree and virt tree info and then create directories.
Note: for SELinux reasons, some information goes in ``/tftpboot``, some in ``/var/www/cobbler`` and some must be
duplicated in both. This is because PXE needs tftp, and automatic installation and Virt operations need http.
Only the kernel and initrd images are duplicated, which is unfortunate, though SELinux won't let me give them
two contexts, so symlinks are not a solution. *Otherwise* duplication is minimal.
"""
# clean out parts of webdir and all of /tftpboot/images and /tftpboot/pxelinux.cfg
for file_obj in os.listdir(self.settings.webdir):
path = os.path.join(self.settings.webdir, file_obj)
if os.path.isfile(path):
if not file_obj.endswith(".py"):
filesystem_helpers.rmfile(path)
if os.path.isdir(path):
if file_obj not in self.settings.webdir_whitelist:
# delete directories that shouldn't exist
filesystem_helpers.rmtree(path)
if file_obj in [
"templates",
"images",
"systems",
"distros",
"profiles",
"repo_profile",
"repo_system",
"rendered",
]:
# clean out directory contents
filesystem_helpers.rmtree_contents(path)
for file_obj in [
self.pxelinux_dir,
self.grub_dir,
self.images_dir,
self.ipxe_dir,
self.esxi_dir,
self.rendered_dir,
]:
filesystem_helpers.rmtree(file_obj)
filesystem_helpers.create_tftpboot_dirs(self.api)
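Each webdir entry handled above falls into one of three outcomes: keep, remove, or empty-and-keep. A table-driven sketch of that decision (helper name and return values are illustrative; the real code applies the two directory checks sequentially rather than exclusively):

```python
REFRESH_DIRS = {"templates", "images", "systems", "distros", "profiles",
                "repo_profile", "repo_system", "rendered"}


def webdir_cleanup_action(name: str, is_dir: bool, whitelist: set) -> str:
    """Return 'keep', 'remove' or 'empty' for one entry of the webdir."""
    if not is_dir:
        # Plain files survive only if they are Python files.
        return "keep" if name.endswith(".py") else "remove"
    if name not in whitelist:
        return "remove"
    if name in REFRESH_DIRS:
        # Directory stays, but its contents are wiped and rebuilt.
        return "empty"
    return "keep"
```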
def write_dhcp(self):
"""
Write all files which are associated to DHCP.
"""
self.logger.info("rendering DHCP files")
self.dhcp.write_configs()
self.dhcp.regen_ethers()
def sync_dhcp(self):
"""
This calls write_dhcp and restarts the DHCP server.
"""
if self.settings.manage_dhcp:
self.write_dhcp()
self.dhcp.sync_dhcp()
def clean_link_cache(self):
"""
All files which are linked into the cache will be deleted so the cache can be rebuild.
"""
for dirtree in [os.path.join(self.bootloc, "images"), self.settings.webdir]:
cachedir = os.path.join(dirtree, ".link_cache")
if os.path.isdir(cachedir):
cmd = [
"find",
cachedir,
"-maxdepth",
"1",
"-type",
"f",
"-links",
"1",
"-exec",
"rm",
"-f",
"{}",
";",
]
utils.subprocess_call(cmd, shell=False)
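The ``find ... -links 1 -exec rm -f`` call above deletes cache files whose hardlink count has dropped to 1, i.e. files no longer referenced from any distro or image tree. A pure-Python equivalent, assuming a Linux filesystem with hardlink counts (hypothetical helper, not Cobbler API):

```python
import os
from typing import List


def prune_unreferenced(cachedir: str) -> List[str]:
    """Delete top-level regular files in the link cache whose hardlink
    count is 1 (nothing outside the cache references them any more)."""
    removed = []
    # Snapshot the directory first so deletion does not disturb iteration.
    for entry in list(os.scandir(cachedir)):
        if entry.is_file(follow_symlinks=False):
            if entry.stat(follow_symlinks=False).st_nlink == 1:
                os.unlink(entry.path)
                removed.append(entry.name)
    return removed
```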
def rsync_gen(self) -> None:
"""
Generate rsync modules of all repositories and distributions
:raises OSError:
"""
template_file = "/etc/cobbler/rsync.template"
try:
with open(template_file, "r", encoding="UTF-8") as template:
template_data = template.read()
except Exception as error:
raise OSError(f"error reading template {template_file}") from error
distros: List[Dict[str, str]] = []
for link in glob.glob(os.path.join(self.settings.webdir, "links", "*")):
distro: Dict[str, str] = {}
distro["path"] = os.path.realpath(link)
distro["name"] = os.path.basename(link)
distros.append(distro)
repos = [
repo.name
for repo in self.api.repos()
if os.path.isdir(
os.path.join(self.settings.webdir, "repo_mirror", repo.name)
)
]
metadata = {
"date": time.asctime(time.gmtime()),
"cobbler_server": self.settings.server,
"distros": distros,
"repos": repos,
"webdir": self.settings.webdir,
}
self.templar.render(template_data, metadata, "/etc/rsyncd.conf")
def add_single_distro(self, distro_obj: "Distro") -> None:
"""
Sync adding a single distro.
        :param distro_obj: The distribution object to add.
"""
# copy image files to images/$name in webdir & tftpboot:
self.tftpgen.copy_single_distro_files(distro_obj, self.settings.webdir, True)
self.tftpd.add_single_distro(distro_obj)
# create the symlink for this distro
src_dir = distro_obj.find_distro_path()
dst_dir = os.path.join(self.settings.webdir, "links", distro_obj.name)
if os.path.exists(dst_dir):
self.logger.warning("skipping symlink, destination (%s) exists", dst_dir)
elif (
filesystem_helpers.path_tail(
os.path.join(self.settings.webdir, "distro_mirror"), src_dir
)
== ""
):
self.logger.warning(
"skipping symlink, the source (%s) is not in %s",
src_dir,
os.path.join(self.settings.webdir, "distro_mirror"),
)
else:
try:
self.logger.info("trying symlink %s -> %s", src_dir, dst_dir)
os.symlink(src_dir, dst_dir)
except (IOError, OSError):
self.logger.error("symlink failed (%s -> %s)", src_dir, dst_dir)
# generate any templates listed in the distro
self.tftpgen.write_templates(distro_obj, write_file=True)
# cascade sync
kids = self.api.find_profile(return_list=True, distro=distro_obj.name)
if not isinstance(kids, list):
raise ValueError("Expected to get list of profiles from search!")
for k in kids:
self.add_single_profile(k, rebuild_menu=False)
self.tftpgen.make_pxe_menu()
def add_single_image(self, image_obj: "Image") -> None:
"""
Sync adding a single image.
        :param image_obj: The image object to add.
"""
self.tftpgen.copy_single_image_files(image_obj)
kids = self.api.find_system(return_list=True, image=image_obj.name)
if not isinstance(kids, list):
            raise ValueError("Expected to get list of systems from search!")
for k in kids:
self.add_single_system(k)
self.tftpgen.make_pxe_menu()
def remove_single_distro(self, distro_obj: "Distro") -> None:
"""
Sync removing a single distro.
        :param distro_obj: The distribution object to remove.
"""
bootloc = self.settings.tftpboot_location
# delete contents of images/$name directory in webdir
filesystem_helpers.rmtree(
os.path.join(self.settings.webdir, "images", distro_obj.name)
)
# delete contents of images/$name in tftpboot
filesystem_helpers.rmtree(os.path.join(bootloc, "images", distro_obj.name))
# delete potential symlink to tree in webdir/links
filesystem_helpers.rmfile(
os.path.join(self.settings.webdir, "links", distro_obj.name)
)
# delete potential distro config files
filesystem_helpers.rmglob_files(
os.path.join(self.settings.webdir, "distro_mirror", "config"),
distro_obj.name + "*.repo",
)
def remove_single_image(self, image_obj: "Image") -> None:
"""
Sync removing a single image.
        :param image_obj: The image object to remove.
"""
bootloc = self.settings.tftpboot_location
filesystem_helpers.rmfile(os.path.join(bootloc, "images2", image_obj.name))
def add_single_profile(
self, profile: "Profile", rebuild_menu: bool = True
) -> Optional[bool]:
"""
Sync adding a single profile.
        :param profile: The profile object to add.
:param rebuild_menu: Whether to rebuild the grub/... menu or not.
:return: ``True`` if this succeeded.
"""
if profile is None or isinstance(profile, list): # type: ignore
# Most likely a subprofile's kid has been removed already, though the object tree has not been reloaded and
# this is just noise.
return None
# Rebuild the yum configuration files for any attached repos generate any templates listed in the distro.
self.tftpgen.write_templates(profile)
# Cascade sync
kids = profile.children
for k in kids:
self.add_single_profile(k, rebuild_menu=False) # type: ignore
kids = self.api.find_system(return_list=True, profile=profile.name)
if not isinstance(kids, list):
            raise ValueError("Expected to get list of systems from search!")
for k in kids:
self.add_single_system(k)
if rebuild_menu:
self.tftpgen.make_pxe_menu()
return True
def remove_single_profile(
self, profile_obj: "Profile", rebuild_menu: bool = True
) -> None:
"""
Sync removing a single profile.
        :param profile_obj: The profile object to remove.
:param rebuild_menu: Whether to rebuild the grub/... menu or not.
"""
# delete profiles/$name file in webdir
filesystem_helpers.rmfile(
os.path.join(self.settings.webdir, "profiles", profile_obj.name)
)
# delete contents on autoinstalls/$name directory in webdir
filesystem_helpers.rmtree(
os.path.join(self.settings.webdir, "autoinstalls", profile_obj.name)
)
if rebuild_menu:
self.tftpgen.make_pxe_menu()
def update_system_netboot_status(self, name: str) -> None:
"""
Update the netboot status of a system.
:param name: The name of the system.
"""
system = self.systems.find(name=name)
if system is None or isinstance(system, list):
return
self.tftpd.sync_single_system(system)
def add_single_system(self, system_obj: "System") -> None:
"""
Sync adding a single system.
        :param system_obj: The system object to add.
"""
# rebuild system_list file in webdir
if self.settings.manage_dhcp:
self.dhcp.sync_single_system(system_obj)
if self.settings.manage_dns:
self.dns.add_single_hosts_entry(system_obj)
# write the PXE files for the system
self.tftpd.sync_single_system(system_obj)
def remove_single_system(self, system_obj: "System") -> None:
"""
Sync removing a single system.
        :param system_obj: The system object to remove.
"""
bootloc = self.settings.tftpboot_location
# delete contents of autoinsts_sys/$name in webdir
for interface_name, _ in list(system_obj.interfaces.items()):
pxe_filename = system_obj.get_config_filename(
interface=interface_name, loader="pxe"
)
grub_filename = system_obj.get_config_filename(
interface=interface_name, loader="grub"
)
if pxe_filename is not None:
filesystem_helpers.rmfile(
os.path.join(bootloc, "pxelinux.cfg", pxe_filename)
)
if not (system_obj.name == "default" and grub_filename is None):
# A default system can't have GRUB entries and thus we want to skip this.
filesystem_helpers.rmfile(
os.path.join(bootloc, "grub", "system", grub_filename) # type: ignore
)
filesystem_helpers.rmfile(
os.path.join(bootloc, "grub", "system_link", system_obj.name)
)
if pxe_filename is not None:
filesystem_helpers.rmtree(os.path.join(bootloc, "esxi", pxe_filename))
if self.settings.manage_dhcp:
self.dhcp.remove_single_system(system_obj)
if self.settings.manage_dns:
self.dns.remove_single_hosts_entry(system_obj)
def remove_single_menu(self, rebuild_menu: bool = True) -> None:
"""
Sync removing a single menu.
:param rebuild_menu: Whether to rebuild the grub/... menu or not.
"""
if rebuild_menu:
self.tftpgen.make_pxe_menu()
| 19,216 | Python | .py | 449 | 32.120267 | 120 | 0.597262 | cobbler/cobbler | 2,597 | 653 | 318 | GPL-2.0 | 9/5/2024, 5:11:26 PM (Europe/Amsterdam) |
| 12,184 | importer.py | cobbler_cobbler/cobbler/actions/importer.py |
"""
This module contains the logic that kicks of the ``cobbler import`` process. This is extracted logic from ``api.py``
that is essentially calling ``modules/mangers/import_signatures.py`` with some preparatory code.
"""
import logging
import os
from typing import TYPE_CHECKING, Optional
from cobbler import utils
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class Importer:
"""
Wrapper class to adhere to the style of all other actions.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor to initialize the class.
:param api: The CobblerAPI.
"""
self.api = api
self.logger = logging.getLogger()
def run(
self,
mirror_url: str,
mirror_name: str,
network_root: Optional[str] = None,
autoinstall_file: Optional[str] = None,
rsync_flags: Optional[str] = None,
arch: Optional[str] = None,
breed: Optional[str] = None,
os_version: Optional[str] = None,
) -> bool:
"""
Automatically import a directory tree full of distribution files.
:param mirror_url: Can be a string that represents a path, a user@host syntax for SSH, or an rsync:// address.
If mirror_url is a filesystem path and mirroring is not desired, set network_root to
something like "nfs://path/to/mirror_url/root"
:param mirror_name: The name of the mirror.
:param network_root: the remote path (nfs/http/ftp) for the distro files
:param autoinstall_file: user-specified response file, which will override the default
:param rsync_flags: Additional flags that will be passed to the rsync call that will sync everything to the
Cobbler webroot.
:param arch: user-specified architecture
:param breed: user-specified breed
:param os_version: user-specified OS version
"""
self.api.log(
"import_tree",
[mirror_url, mirror_name, network_root, autoinstall_file, rsync_flags],
)
# Both --path and --name are required arguments.
if mirror_url is None or not mirror_url: # type: ignore[reportUnnecessaryComparison]
self.logger.info("import failed. no --path specified")
return False
if not mirror_name:
self.logger.info("import failed. no --name specified")
return False
path = os.path.normpath(
f"{self.api.settings().webdir}/distro_mirror/{mirror_name}"
)
if arch is not None:
arch = arch.lower()
if arch == "x86":
# be consistent
arch = "i386"
if path.split("-")[-1] != arch:
path += f"-{arch}"
# We need to mirror (copy) the files.
self.logger.info(
"importing from a network location, running rsync to fetch the files first"
)
filesystem_helpers.mkdir(path)
# Prevent rsync from creating the directory name twice if we are copying via rsync.
if not mirror_url.endswith("/"):
mirror_url = f"{mirror_url}/"
if (
mirror_url.startswith("http://")
or mirror_url.startswith("https://")
or mirror_url.startswith("ftp://")
or mirror_url.startswith("nfs://")
):
# HTTP mirrors are kind of primitive. rsync is better. That's why this isn't documented in the manpage and
# we don't support them.
# TODO: how about adding recursive FTP as an option?
self.logger.info("unsupported protocol")
return False
# Good, we're going to use rsync.. We don't use SSH for public mirrors and local files.
        # Presence of user@host syntax means use SSH
        use_ssh = not mirror_url.startswith("rsync://") and not mirror_url.startswith("/")
        # --archive but without -p to avoid copying read-only ISO permissions and making sure we have write access
        rsync_cmd = ["rsync", "-rltgoD", "--chmod=ug=rwX"]
        if use_ssh:
            # With shell=False there is no word splitting, so pass "-e" and "ssh" as separate arguments.
            rsync_cmd += ["-e", "ssh"]
rsync_cmd.append("--progress")
        if rsync_flags:
            # Split the user-supplied flags so each one becomes its own argument (shell=False).
            rsync_cmd.extend(rsync_flags.split())
# If --available-as was specified, limit the files we pull down via rsync to just those that are critical
# to detecting what the distro is
if network_root is not None:
rsync_cmd.append("--include-from=/etc/cobbler/import_rsync_whitelist")
rsync_cmd += [mirror_url, path]
# kick off the rsync now
rsync_return_code = utils.subprocess_call(rsync_cmd, shell=False)
if rsync_return_code != 0:
raise RuntimeError(
f"rsync import failed with return code {rsync_return_code}!"
)
if network_root is not None:
# In addition to mirroring, we're going to assume the path is available over http, ftp, and nfs, perhaps on
# an external filer. Scanning still requires --mirror is a filesystem path, but --available-as marks the
# network path. This allows users to point the path at a directory containing just the network boot files
# while the rest of the distro files are available somewhere else.
# Find the filesystem part of the path, after the server bits, as each distro URL needs to be calculated
# relative to this.
if not network_root.endswith("/"):
network_root += "/"
valid_roots = ["nfs://", "ftp://", "http://", "https://"]
for valid_root in valid_roots:
if network_root.startswith(valid_root):
break
else:
self.logger.info(
"Network root given to --available-as must be nfs://, ftp://, http://, or https://"
)
return False
if network_root.startswith("nfs://"):
try:
(_, _, _) = network_root.split(":", 3)
except ValueError:
self.logger.info(
"Network root given to --available-as is missing a colon, please see the manpage example."
)
return False
import_module = self.api.get_module_by_name("managers.import_signatures")
if import_module is None:
raise ImportError("Could not retrieve import signatures module!")
import_manager = import_module.get_import_manager(self.api)
import_manager.run(
path, mirror_name, network_root, autoinstall_file, arch, breed, os_version
)
return True
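The command assembly in `run()` can be sketched standalone. The mirror URL and target path below are hypothetical, and `build_rsync_cmd` is an illustrative helper, not part of Cobbler's API; the whitelist path mirrors the hardcoded one above.

```python
# Sketch of how Importer.run() assembles the rsync argument list (shell=False,
# so every flag must be its own list element).
def build_rsync_cmd(mirror_url, path, rsync_flags=None, network_root=None):
    """Rebuild the argument list the importer hands to subprocess."""
    if not mirror_url.endswith("/"):
        mirror_url += "/"
    cmd = ["rsync", "-rltgoD", "--chmod=ug=rwX"]
    if not mirror_url.startswith("rsync://") and not mirror_url.startswith("/"):
        cmd += ["-e", "ssh"]  # user@host syntax implies SSH
    cmd.append("--progress")
    if rsync_flags:
        cmd.extend(rsync_flags.split())
    if network_root is not None:
        # Limit the transfer to files needed for distro detection.
        cmd.append("--include-from=/etc/cobbler/import_rsync_whitelist")
    cmd += [mirror_url, path]
    return cmd


cmd = build_rsync_cmd(
    "rsync://mirror.example/fedora", "/srv/www/distro_mirror/f39", "--delete"
)
```

For an rsync:// URL no SSH transport is added, while a user@host source would get `-e ssh` prepended before the flags.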
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/actions/check.py
# Repo: cobbler/cobbler | Language: Python | License: GPL-2.0
# ---------------------------------------------------------------------------
"""
Cobbler Trigger Module that checks against a list of hardcoded potential common errors in a Cobbler installation.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import glob
import logging
import os
import re
from typing import TYPE_CHECKING, List
from xmlrpc.client import ServerProxy
from cobbler import utils
from cobbler.utils import process_management
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class CobblerCheck:
"""
Validates whether the system is reasonably well configured for
serving up content. This is the code behind 'cobbler check'.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor
:param api: The API which holds all information.
"""
self.api = api
self.settings = api.settings()
self.logger = logging.getLogger()
self.checked_family = ""
def run(self) -> List[str]:
"""
The CLI usage is "cobbler check" before "cobbler sync".
        :return: An empty list if there are no errors, otherwise a list of things to correct prior to running the
                 application 'for real'.
"""
status: List[str] = []
self.checked_family = utils.get_family()
self.check_name(status)
self.check_selinux(status)
if self.settings.manage_dhcp:
mode = self.api.get_sync().dhcp.what()
if mode == "isc":
self.check_dhcpd_bin(status)
self.check_dhcpd_conf(status)
self.check_service(status, "dhcpd")
elif mode == "dnsmasq":
self.check_dnsmasq_bin(status)
self.check_service(status, "dnsmasq")
if self.settings.manage_dns:
mode = self.api.get_sync().dns.what()
if mode == "bind":
self.check_bind_bin(status)
self.check_service(status, "named")
elif mode == "dnsmasq" and not self.settings.manage_dhcp:
self.check_dnsmasq_bin(status)
self.check_service(status, "dnsmasq")
mode = self.api.get_sync().tftpd.what()
if mode == "in_tftpd":
self.check_tftpd_dir(status)
elif mode == "tftpd_py":
self.check_ctftpd_dir(status)
self.check_service(status, "cobblerd")
self.check_bootloaders(status)
self.check_for_wget_curl(status)
self.check_rsync_conf(status)
self.check_iptables(status)
self.check_yum(status)
self.check_debmirror(status)
self.check_for_ksvalidator(status)
self.check_for_default_password(status)
self.check_for_unreferenced_repos(status)
self.check_for_unsynced_repos(status)
self.check_for_cman(status)
return status
def check_for_ksvalidator(self, status: List[str]) -> None:
"""
Check if the ``ksvalidator`` is present in ``/usr/bin``.
        :param status: The status list with possible problems.
"""
        # FIXME: This tool is cross-platform via Python. Thus all distros can have it.
# ubuntu also identifies as "debian"
if self.checked_family in ["debian", "suse"]:
return
if not os.path.exists("/usr/bin/ksvalidator"):
status.append("ksvalidator was not found, install pykickstart")
@staticmethod
def check_for_cman(status: List[str]) -> None:
"""
Check if the fence agents are available. This is done through checking if the binary ``fence_ilo`` is present
in ``/sbin`` or ``/usr/sbin``.
        :param status: The status list with possible problems.
"""
# not doing rpm -q here to be cross-distro friendly
if not os.path.exists("/sbin/fence_ilo") and not os.path.exists(
"/usr/sbin/fence_ilo"
):
status.append(
"fencing tools were not found, and are required to use the (optional) power management "
"features. install cman or fence-agents to use them"
)
def check_service(self, status: List[str], which: str, notes: str = "") -> None:
"""
Check if the service command is available or the old init.d system has to be used.
:param status: The status list with possible problems.
:param which: The service to check for.
        :param notes: A manual note to attach.
"""
if notes != "":
notes = f" (NOTE: {notes})"
return_code = 0
if process_management.is_supervisord():
with ServerProxy("http://localhost:9001/RPC2") as server:
process_info = server.supervisor.getProcessInfo(which)
if (
isinstance(process_info, dict)
and process_info["statename"] != "RUNNING"
):
status.append(f"service {which} is not running{notes}")
return
elif process_management.is_systemd():
return_code = utils.subprocess_call(
["systemctl", "status", which], shell=False
)
if return_code != 0:
status.append(f'service "{which}" is not running{notes}')
return
elif self.checked_family in ("redhat", "suse"):
if os.path.exists(f"/etc/rc.d/init.d/{which}"):
return_code = utils.subprocess_call(
["/sbin/service", which, "status"], shell=False
)
if return_code != 0:
status.append(f"service {which} is not running{notes}")
return
elif self.checked_family == "debian":
# we still use /etc/init.d
if os.path.exists(f"/etc/init.d/{which}"):
return_code = utils.subprocess_call(
[f"/etc/init.d/{which}", "status"], shell=False
)
if return_code != 0:
status.append(f"service {which} is not running{notes}")
return
else:
status.append(
f"Unknown distribution type, cannot check for running service {which}"
)
return
def check_iptables(self, status: List[str]) -> None:
"""
        Check if iptables is running and, if so, print the ports that need to be unblocked. iptables is unavailable as
        a service on Debian, SUSE and CentOS 7; however, this only indicates that the iptables rules are persisted via
        other means.
:param status: The status list with possible problems.
"""
# TODO: Rewrite check to be able to verify this is in more cases
if os.path.exists("/etc/rc.d/init.d/iptables"):
return_code = utils.subprocess_call(
["/sbin/service", "iptables", "status"], shell=False
)
if return_code == 0:
status.append(
f"since iptables may be running, ensure 69, 80/443, and {self.settings.xmlrpc_port} are unblocked"
)
def check_yum(self, status: List[str]) -> None:
"""
Check if the yum-stack is available. On Debian based distros this will always return without checking.
:param status: The status list with possible problems.
"""
# FIXME: Replace this with calls which check for the path of these tools.
if self.checked_family == "debian":
return
if not os.path.exists("/usr/bin/createrepo"):
status.append(
"createrepo package is not installed, needed for cobbler import and cobbler reposync, "
"install createrepo?"
)
if not os.path.exists("/usr/bin/dnf") and not os.path.exists(
"/usr/bin/reposync"
):
status.append("reposync not installed, install yum-utils")
if os.path.exists("/usr/bin/dnf") and not os.path.exists("/usr/bin/reposync"):
status.append(
"reposync is not installed, install yum-utils or dnf-plugins-core"
)
if not os.path.exists("/usr/bin/dnf") and not os.path.exists(
"/usr/bin/yumdownloader"
):
status.append("yumdownloader is not installed, install yum-utils")
if os.path.exists("/usr/bin/dnf") and not os.path.exists(
"/usr/bin/yumdownloader"
):
status.append(
"yumdownloader is not installed, install yum-utils or dnf-plugins-core"
)
def check_debmirror(self, status: List[str]) -> None:
"""
Check if debmirror is available and the config file for it exists. If the distro family is suse then this will
pass without checking.
:param status: The status list with possible problems.
"""
if self.checked_family == "suse":
return
if not os.path.exists("/usr/bin/debmirror"):
status.append(
"debmirror package is not installed, it will be required to manage debian deployments and "
"repositories"
)
if os.path.exists("/etc/debmirror.conf"):
with open("/etc/debmirror.conf", encoding="UTF-8") as debmirror_fd:
re_dists = re.compile(r"@dists=")
re_arches = re.compile(r"@arches=")
for line in debmirror_fd.readlines():
if re_dists.search(line) and not line.strip().startswith("#"):
status.append(
"comment out 'dists' on /etc/debmirror.conf for proper debian support"
)
if re_arches.search(line) and not line.strip().startswith("#"):
status.append(
"comment out 'arches' on /etc/debmirror.conf for proper debian support"
)
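The debmirror.conf scan above can be sketched against an in-memory sample instead of reading `/etc/debmirror.conf`; the sample contents are hypothetical.

```python
# Uncommented @dists= lines are flagged; the commented one is ignored.
import re

sample_conf = """\
# @dists="sid";
@arches="i386,amd64";
@dists="stable";
"""

re_dists = re.compile(r"@dists=")
findings = [
    line.strip()
    for line in sample_conf.splitlines()
    if re_dists.search(line) and not line.strip().startswith("#")
]
```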
def check_name(self, status: List[str]) -> None:
"""
        If the server name in the config file is still set to localhost, automatic installations run from koan will
        not have proper kernel command line parameters.
:param status: The status list with possible problems.
"""
if self.settings.server == "127.0.0.1":
status.append(
"The 'server' field in /etc/cobbler/settings.yaml must be set to something other than localhost, "
"or automatic installation features will not work. This should be a resolvable hostname or "
"IP for the boot server as reachable by all machines that will use it."
)
if self.settings.next_server_v4 == "127.0.0.1":
status.append(
"For PXE to be functional, the 'next_server_v4' field in /etc/cobbler/settings.yaml must be set to "
"something other than 127.0.0.1, and should match the IP of the boot server on the PXE "
"network."
)
if self.settings.next_server_v6 == "::1":
status.append(
"For PXE to be functional, the 'next_server_v6' field in /etc/cobbler/settings.yaml must be set to "
"something other than ::1, and should match the IP of the boot server on the PXE network."
)
def check_selinux(self, status: List[str]) -> None:
"""
Suggests various SELinux rules changes to run Cobbler happily with SELinux in enforcing mode.
:param status: The status list with possible problems.
"""
# FIXME: this method could use some refactoring in the future.
if self.checked_family == "debian":
return
enabled = self.api.is_selinux_enabled()
if enabled:
status.append(
"SELinux is enabled. Please review the following wiki page for details on ensuring Cobbler "
"works correctly in your SELinux environment:\n "
"https://cobbler.readthedocs.io/en/latest/user-guide/selinux.html"
)
def check_for_default_password(self, status: List[str]) -> None:
"""
Check if the default password of Cobbler was changed.
:param status: The status list with possible problems.
"""
default_pass = self.settings.default_password_crypted
if default_pass == "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac.":
status.append(
"The default password used by the sample templates for newly installed machines ("
"default_password_crypted in /etc/cobbler/settings.yaml) is still set to 'cobbler' and should be "
"changed, try: \"openssl passwd -1 -salt 'random-phrase-here' 'your-password-here'\" to "
"generate new one"
)
def check_for_unreferenced_repos(self, status: List[str]) -> None:
"""
Check if there are repositories which are not used and thus could be removed.
:param status: The status list with possible problems.
"""
repos: List[str] = []
referenced: List[str] = []
not_found: List[str] = []
for repo in self.api.repos():
repos.append(repo.name)
for profile in self.api.profiles():
my_repos = profile.repos
if my_repos != "<<inherit>>":
referenced.extend(my_repos)
for repo in referenced:
if repo not in repos and repo != "<<inherit>>":
not_found.append(repo)
if len(not_found) > 0:
status.append(
"One or more repos referenced by profile objects is no longer defined in Cobbler:"
f" {', '.join(not_found)}"
)
def check_for_unsynced_repos(self, status: List[str]) -> None:
"""
Check if there are unsynchronized repositories which need an update.
:param status: The status list with possible problems.
"""
need_sync: List[str] = []
for repo in self.api.repos():
if repo.mirror_locally is True:
lookfor = os.path.join(self.settings.webdir, "repo_mirror", repo.name)
if not os.path.exists(lookfor):
need_sync.append(repo.name)
if len(need_sync) > 0:
status.append(
"One or more repos need to be processed by cobbler reposync for the first time before "
f"automating installations using them: {', '.join(need_sync)}"
)
@staticmethod
def check_dhcpd_bin(status: List[str]) -> None:
"""
Check if dhcpd is installed.
:param status: The status list with possible problems.
"""
if not os.path.exists("/usr/sbin/dhcpd"):
status.append("dhcpd is not installed")
@staticmethod
def check_dnsmasq_bin(status: List[str]) -> None:
"""
Check if dnsmasq is installed.
:param status: The status list with possible problems.
"""
return_code = utils.subprocess_call(["dnsmasq", "--help"], shell=False)
if return_code != 0:
status.append("dnsmasq is not installed and/or in path")
@staticmethod
def check_bind_bin(status: List[str]) -> None:
"""
Check if bind is installed.
:param status: The status list with possible problems.
"""
return_code = utils.subprocess_call(["named", "-v"], shell=False)
# it should return something like "BIND 9.6.1-P1-RedHat-9.6.1-6.P1.fc11"
if return_code != 0:
status.append("named is not installed and/or in path")
@staticmethod
def check_for_wget_curl(status: List[str]) -> None:
"""
Check to make sure wget or curl is installed
:param status: The status list with possible problems.
"""
rc_wget = utils.subprocess_call(["wget", "--help"], shell=False)
rc_curl = utils.subprocess_call(["curl", "--help"], shell=False)
if rc_wget != 0 and rc_curl != 0:
status.append(
"Neither wget nor curl are installed and/or available in $PATH. Cobbler requires that one "
"of these utilities be installed."
)
@staticmethod
def check_bootloaders(status: List[str]) -> None:
"""
Check if network bootloaders are installed
:param status: The status list with possible problems.
"""
# FIXME: move zpxe.rexx to loaders
bootloaders = {
"menu.c32": [
"/usr/share/syslinux/menu.c32",
"/usr/lib/syslinux/menu.c32",
"/var/lib/cobbler/loaders/menu.c32",
],
"pxelinux.0": [
"/usr/share/syslinux/pxelinux.0",
"/usr/lib/syslinux/pxelinux.0",
"/var/lib/cobbler/loaders/pxelinux.0",
],
"efi": [
"/var/lib/cobbler/loaders/grub-x86.efi",
"/var/lib/cobbler/loaders/grub-x86_64.efi",
],
}
# look for bootloaders at the glob locations above
found_bootloaders: List[str] = []
items = list(bootloaders.keys())
for loader_name in items:
patterns = bootloaders[loader_name]
for pattern in patterns:
matches = glob.glob(pattern)
if len(matches) > 0:
found_bootloaders.append(loader_name)
not_found: List[str] = []
# invert the list of what we've found so we can report on what we haven't found
for loader_name in items:
if loader_name not in found_bootloaders:
not_found.append(loader_name)
if len(not_found) > 0:
status.append(
                "some network boot-loaders are missing from /var/lib/cobbler/loaders. If you only want to "
                "handle x86/x86_64 netbooting, ensure that you have installed a *recent* version of the "
                "syslinux package and you can ignore this message entirely. Files in this directory, should "
                "you want to support all architectures, should include pxelinux.0 and menu.c32."
)
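The found/not-found bookkeeping in `check_bootloaders()` can be sketched with a pretend glob result instead of touching the filesystem; the matched loader below is hypothetical.

```python
# Record which loader names matched a glob, then invert to report the missing ones.
bootloaders = ["menu.c32", "pxelinux.0", "efi"]
found_bootloaders = ["pxelinux.0"]  # pretend only pxelinux.0 matched a pattern
not_found = [name for name in bootloaders if name not in found_bootloaders]
```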
def check_tftpd_dir(self, status: List[str]) -> None:
"""
Check if cobbler.conf's tftpboot directory exists
:param status: The status list with possible problems.
"""
if self.checked_family == "debian":
return
bootloc = self.settings.tftpboot_location
if not os.path.exists(bootloc):
status.append(f"please create directory: {bootloc}")
def check_ctftpd_dir(self, status: List[str]) -> None:
"""
Check if ``cobbler.conf``'s tftpboot directory exists.
:param status: The status list with possible problems.
"""
if self.checked_family == "debian":
return
bootloc = self.settings.tftpboot_location
if not os.path.exists(bootloc):
status.append(f"please create directory: {bootloc}")
def check_rsync_conf(self, status: List[str]) -> None:
"""
Check that rsync is enabled to autostart.
:param status: The status list with possible problems.
"""
if self.checked_family == "debian":
return
if os.path.exists("/usr/lib/systemd/system/rsyncd.service"):
if not os.path.exists(
"/etc/systemd/system/multi-user.target.wants/rsyncd.service"
):
status.append("enable and start rsyncd.service with systemctl")
def check_dhcpd_conf(self, status: List[str]) -> None:
"""
NOTE: this code only applies if Cobbler is *NOT* set to generate a ``dhcp.conf`` file.
Check that dhcpd *appears* to be configured for pxe booting. We can't assure file correctness. Since a Cobbler
user might have dhcp on another server, it's okay if it's not there and/or not configured correctly according
to automated scans.
:param status: The status list with possible problems.
"""
if self.settings.manage_dhcp:
return
if os.path.exists(self.settings.dhcpd_conf):
match_next = False
match_file = False
with open(self.settings.dhcpd_conf, encoding="UTF-8") as dhcpd_conf_fd:
for line in dhcpd_conf_fd.readlines():
if line.find("next-server") != -1:
match_next = True
if line.find("filename") != -1:
match_file = True
if not match_next:
status.append(
f"expecting next-server entry in {self.settings.dhcpd_conf}"
)
if not match_file:
status.append(f"missing file: {self.settings.dhcpd_conf}")
else:
status.append(f"missing file: {self.settings.dhcpd_conf}")
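The next-server/filename scan from `check_dhcpd_conf()` can be applied to an in-memory sample dhcpd.conf; the sample's subnet and addresses are hypothetical.

```python
# Both markers must appear somewhere in the file for PXE booting to look configured.
sample = """\
subnet 192.0.2.0 netmask 255.255.255.0 {
    next-server 192.0.2.1;
    filename "pxelinux.0";
}
"""
match_next = any(line.find("next-server") != -1 for line in sample.splitlines())
match_file = any(line.find("filename") != -1 for line in sample.splitlines())
```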
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/actions/status.py
# Repo: cobbler/cobbler | Language: Python | License: GPL-2.0
# ---------------------------------------------------------------------------
"""
Reports on automatic installation activity by examining the logs in
/var/log/cobbler.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import glob
import gzip
import re
import time
from typing import TYPE_CHECKING, Any, Dict, List, Union
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class InstallStatus:
"""
Helper class that represents the current state of the installation of a system or profile.
"""
def __init__(self) -> None:
"""
Default constructor.
"""
self.most_recent_start = -1.0
self.most_recent_stop = -1.0
self.most_recent_target = ""
self.seen_start = -1.0
self.seen_stop = -1.0
self.state = "?"
def __eq__(self, other: Any) -> bool:
"""
Equality function that overrides the default behavior.
:param other: Other object.
:returns: True in case object is of the same type and all attributes are identical. False otherwise.
"""
if isinstance(other, InstallStatus):
return (
self.most_recent_start == other.most_recent_start
and self.most_recent_stop == other.most_recent_stop
and self.most_recent_target == other.most_recent_target
and self.seen_start == other.seen_start
and self.seen_stop == other.seen_stop
and self.state == other.state
)
return False
class CobblerStatusReport:
"""
    Collects and reports the status of automatic installations by scanning Cobbler's install logs.
"""
def __init__(self, api: "CobblerAPI", mode: str) -> None:
"""
Constructor
:param api: The API which holds all information.
        :param mode: This describes how Cobbler should report. Currently, only the option ``text`` can be set
                     explicitly.
"""
self.settings = api.settings()
self.ip_data: Dict[str, InstallStatus] = {}
self.mode = mode
@staticmethod
def collect_logfiles() -> List[str]:
"""
Collects all installation logfiles from ``/var/log/cobbler/``. This will also collect gzipped logfiles.
:returns: List of absolute paths that are matching the filepattern ``install.log`` or ``install.log.x``, where
x is a number equal or greater than zero.
"""
unsorted_files = glob.glob("/var/log/cobbler/install.log*")
files_dict: Dict[int, str] = {}
log_id_re = re.compile(r"install.log.(\d+)")
for fname in unsorted_files:
id_match = log_id_re.search(fname)
if id_match:
files_dict[int(id_match.group(1))] = fname
files: List[str] = []
sorted_ids = sorted(files_dict.keys(), reverse=True)
for file_id in sorted_ids:
files.append(files_dict[file_id])
if "/var/log/cobbler/install.log" in unsorted_files:
files.append("/var/log/cobbler/install.log")
return files
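The rotation-aware ordering done by `collect_logfiles()` can be demonstrated with a hypothetical file list: numbered rotations sort from highest (oldest) down, with the current `install.log` appended last.

```python
# Sort rotated logs by their numeric suffix, descending, then append the live log.
import re

unsorted_files = [
    "/var/log/cobbler/install.log",
    "/var/log/cobbler/install.log.2",
    "/var/log/cobbler/install.log.10",
    "/var/log/cobbler/install.log.1",
]
log_id_re = re.compile(r"install.log.(\d+)")
files_dict = {}
for fname in unsorted_files:
    id_match = log_id_re.search(fname)
    if id_match:
        files_dict[int(id_match.group(1))] = fname
ordered = [files_dict[i] for i in sorted(files_dict, reverse=True)]
if "/var/log/cobbler/install.log" in unsorted_files:
    ordered.append("/var/log/cobbler/install.log")
```

Note the numeric sort: a lexicographic sort would put `.10` between `.1` and `.2`.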
def scan_logfiles(self) -> None:
"""
Scan the installation log-files - starting with the oldest file.
"""
for fname in self.collect_logfiles():
if fname.endswith(".gz"):
                logfile_fd = gzip.open(fname, "rt")
            else:
                logfile_fd = open(fname, "rt", encoding="UTF-8")
            data = logfile_fd.read()
            for line in data.split("\n"):
                tokens = line.split()
                if len(tokens) == 0:
                    continue
                (profile_or_system, name, ip_address, start_or_stop, timestamp) = tokens
                self.catalog(
                    profile_or_system, name, ip_address, start_or_stop, float(timestamp)
                )
            logfile_fd.close()
def catalog(
self,
profile_or_system: str,
name: str,
ip_address: str,
start_or_stop: str,
timestamp: float,
) -> None:
"""
Add a system to ``cobbler status``.
:param profile_or_system: This can be ``system`` or ``profile``.
:param name: The name of the object.
:param ip_address: The ip of the system to watch.
:param start_or_stop: This parameter may be ``start`` or ``stop``
:param timestamp: Timestamp as returned by ``time.time()``
"""
if ip_address not in self.ip_data:
self.ip_data[ip_address] = InstallStatus()
elem = self.ip_data[ip_address]
timestamp = float(timestamp)
mrstart = elem.most_recent_start
mrstop = elem.most_recent_stop
mrtarg = elem.most_recent_target
if start_or_stop == "start":
if mrstart < timestamp:
mrstart = timestamp
mrtarg = f"{profile_or_system}:{name}"
elem.seen_start += 1
if start_or_stop == "stop":
if mrstop < timestamp:
mrstop = timestamp
mrtarg = f"{profile_or_system}:{name}"
elem.seen_stop += 1
elem.most_recent_start = mrstart
elem.most_recent_stop = mrstop
elem.most_recent_target = mrtarg
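The "newest start wins" bookkeeping in `catalog()` can be sketched standalone; the `(kind, name, timestamp)` events below are hypothetical stand-ins for parsed log lines.

```python
# Only a timestamp newer than the current maximum updates the recorded target.
most_recent_start = -1.0
most_recent_target = ""
events = [
    ("profile", "fedora39", 100.0),
    ("system", "web01", 250.0),
    ("profile", "older-run", 50.0),  # older than the current maximum: ignored
]
for kind, name, timestamp in events:
    if most_recent_start < timestamp:
        most_recent_start = timestamp
        most_recent_target = f"{kind}:{name}"
```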
def process_results(self) -> Dict[Any, Any]:
"""
Look through all systems which were collected and update the status.
:return: Return ``ip_data`` of the object.
"""
# FIXME: this should update the times here
tnow = int(time.time())
for _, elem in self.ip_data.items():
start = int(elem.most_recent_start)
stop = int(elem.most_recent_stop)
if stop > start:
elem.state = "finished"
else:
delta = tnow - start
minutes = delta // 60
seconds = delta % 60
if minutes > 100:
elem.state = "unknown/stalled"
else:
elem.state = f"installing ({minutes}m {seconds}s)"
return self.ip_data
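The per-host state string computed in `process_results()` can be extracted into a small helper; `install_state` and the fixed timestamps are illustrative, not Cobbler API.

```python
# "finished" if a stop is newer than the last start, "unknown/stalled" after
# 100 minutes, otherwise an elapsed-time string.
def install_state(start: int, stop: int, now: int) -> str:
    if stop > start:
        return "finished"
    delta = now - start
    minutes = delta // 60
    seconds = delta % 60
    if minutes > 100:
        return "unknown/stalled"
    return f"installing ({minutes}m {seconds}s)"
```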
def get_printable_results(self) -> str:
"""
Convert the status of Cobbler from a machine-readable form to human-readable.
:return: A nice formatted representation of the results of ``cobbler status``.
"""
printable_status_format = "%-15s|%-20s|%-17s|%-17s"
ip_addresses = list(self.ip_data.keys())
ip_addresses.sort()
line = (
"ip",
"target",
"start",
"state",
)
buf = printable_status_format % line
for ip_address in ip_addresses:
elem = self.ip_data[ip_address]
if elem.most_recent_start > -1:
start = time.ctime(elem.most_recent_start)
else:
start = "Unknown"
line = (ip_address, elem.most_recent_target, start, elem.state)
buf += "\n" + printable_status_format % line
return buf
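The printf-style column layout used by `get_printable_results()` is four left-justified fields separated by pipes; the row values below are hypothetical.

```python
# Each %-Ns field pads to a fixed width, so rows line up under the header.
fmt = "%-15s|%-20s|%-17s|%-17s"
header = fmt % ("ip", "target", "start", "state")
row = fmt % ("192.0.2.7", "system:web01", "Unknown", "finished")
```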
def run(self) -> Union[Dict[Any, Any], str]:
"""
        Calculate and print an automatic installation status report.
"""
self.scan_logfiles()
results = self.process_results()
if self.mode == "text":
return self.get_printable_results()
return results
# ---------------------------------------------------------------------------
# File: cobbler_cobbler/cobbler/actions/mkloaders.py
# Repo: cobbler/cobbler | Language: Python | License: GPL-2.0
# ---------------------------------------------------------------------------
"""Cobbler action to create bootable Grub2 images.
This action calls grub2-mkimage for all bootloader formats configured in
Cobbler's settings. See man(1) grub2-mkimage for available formats.
"""
import logging
import pathlib
import re
import subprocess
import sys
import typing
from cobbler import utils
if typing.TYPE_CHECKING:
from cobbler.api import CobblerAPI
# NOTE: does not warrant being a class, but all Cobbler actions use a class's ".run()" as the entrypoint
class MkLoaders:
"""
Action to create bootloader images.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
MkLoaders constructor.
:param api: CobblerAPI instance for accessing settings
"""
self.logger = logging.getLogger()
self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)
# GRUB 2
self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)
self.boot_loaders_formats: typing.Dict[
typing.Any, typing.Any
] = api.settings().bootloaders_formats
self.modules: typing.List[str] = api.settings().bootloaders_modules
# UEFI GRUB
self.secure_boot_grub_path_glob = pathlib.Path(
api.settings().secure_boot_grub_folder
)
self.secure_boot_grub_regex = re.compile(api.settings().secure_boot_grub_file)
# Syslinux
self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)
self.syslinux_memdisk_folder = pathlib.Path(
api.settings().syslinux_memdisk_folder
)
self.syslinux_pxelinux_folder = pathlib.Path(
api.settings().syslinux_pxelinux_folder
)
# Shim
self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)
self.shim_regex = re.compile(api.settings().bootloaders_shim_file)
# iPXE
self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)
def run(self) -> None:
"""
        Run the MkLoaders action. If the files or executables for a bootloader are not available, the fact is logged
        and its creation is skipped.
"""
self.create_directories()
self.make_shim()
self.make_ipxe()
self.make_syslinux()
self.make_grub()
def make_shim(self) -> None:
"""
Create symlink of the shim bootloader in case it is available on the system.
"""
target_shim = find_file(self.shim_glob, self.shim_regex)
if target_shim is None:
self.logger.info(
'Unable to find "shim.efi" file. Please adjust "bootloaders_shim_file" regex. Bailing out '
"of linking the shim!"
)
return
# Symlink the absolute target of the match
symlink(
target_shim,
self.bootloaders_dir.joinpath(pathlib.Path("grub/shim.efi")),
skip_existing=True,
)
def make_ipxe(self) -> None:
"""
Create symlink of the iPXE bootloader in case it is available on the system.
"""
if not self.ipxe_folder.exists():
self.logger.info(
'ipxe directory did not exist. Please adjust the "bootloaders_ipxe_folder". Bailing out '
"of iPXE setup!"
)
return
symlink(
self.ipxe_folder.joinpath("undionly.kpxe"),
self.bootloaders_dir.joinpath(pathlib.Path("undionly.pxe")),
skip_existing=True,
)
def make_syslinux(self) -> None:
"""
Create symlink of the important syslinux bootloader files in case they are available on the system.
"""
if not utils.command_existing("syslinux"):
self.logger.info(
"syslinux command not available. Bailing out of syslinux setup!"
)
return
syslinux_version = get_syslinux_version()
# Make modules
symlink(
self.syslinux_folder.joinpath("menu.c32"),
self.bootloaders_dir.joinpath("menu.c32"),
skip_existing=True,
)
# According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,
# 'menu.c32' depends on 'libutil.c32'.
libutil_c32_path = self.syslinux_folder.joinpath("libutil.c32")
if syslinux_version > 4 and libutil_c32_path.exists():
symlink(
libutil_c32_path,
self.bootloaders_dir.joinpath("libutil.c32"),
skip_existing=True,
)
if syslinux_version < 5:
# This file is only required for Syslinux 5 and newer.
# Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules
self.logger.info(
'syslinux version 4 detected! Skip making symlink of "ldlinux.c32" file!'
)
else:
symlink(
self.syslinux_folder.joinpath("ldlinux.c32"),
self.bootloaders_dir.joinpath("ldlinux.c32"),
skip_existing=True,
)
# Make memdisk
symlink(
self.syslinux_memdisk_folder.joinpath("memdisk"),
self.bootloaders_dir.joinpath("memdisk"),
skip_existing=True,
)
# Make pxelinux.0
symlink(
self.syslinux_pxelinux_folder.joinpath("pxelinux.0"),
self.bootloaders_dir.joinpath("pxelinux.0"),
skip_existing=True,
)
# Make linux.c32 for syslinux + wimboot
libcom32_c32_path = self.syslinux_folder.joinpath("libcom32.c32")
if syslinux_version > 4 and libcom32_c32_path.exists():
symlink(
self.syslinux_folder.joinpath("linux.c32"),
self.bootloaders_dir.joinpath("linux.c32"),
skip_existing=True,
)
# Make libcom32.c32
# 'linux.c32' depends on 'libcom32.c32'
symlink(
self.syslinux_folder.joinpath("libcom32.c32"),
self.bootloaders_dir.joinpath("libcom32.c32"),
skip_existing=True,
)
def make_grub(self) -> None:
"""
Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders
for other architectures if the modules to do so are available.
"""
if not utils.command_existing("grub2-mkimage"):
self.logger.info(
"grub2-mkimage command not available. Bailing out of GRUB2 generation!"
)
return
for image_format, options in self.boot_loaders_formats.items():
secure_boot = options.get("use_secure_boot_grub", None)
if secure_boot:
binary_name = options["binary_name"]
target_grub = find_file(
self.secure_boot_grub_path_glob, self.secure_boot_grub_regex
)
if not target_grub:
                    # Pass a single format string; a tuple would be logged verbatim and "%s" never substituted.
                    self.logger.info(
                        "Could not find secure bootloader in the provided location. "
                        'Skipping linking secure bootloader for "%s".',
                        image_format,
                    )
continue
symlink(
target_grub,
self.bootloaders_dir.joinpath("grub", binary_name),
skip_existing=True,
)
self.logger.info(
'Successfully copied secure bootloader for arch "%s"!', image_format
)
continue
bl_mod_dir = options.get("mod_dir", image_format)
mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)
if not mod_dir.exists():
self.logger.info(
                    'GRUB2 modules directory for arch "%s" did not exist. Skipping GRUB2 creation',
image_format,
)
continue
try:
mkimage(
image_format,
self.bootloaders_dir.joinpath("grub", options["binary_name"]),
self.modules + options.get("extra_modules", []),
)
except subprocess.CalledProcessError:
self.logger.info(
'grub2-mkimage failed for arch "%s"! Maybe you did forget to install the grub modules '
"for the architecture?",
image_format,
)
utils.log_exc()
# don't create module symlinks if grub2-mkimage is unsuccessful
continue
self.logger.info(
'Successfully built bootloader for arch "%s"!', image_format
)
# Create a symlink for GRUB 2 modules
# assumes a single GRUB can be used to boot all kinds of distros
# if this assumption turns out incorrect, individual "grub" subdirectories are needed
symlink(
mod_dir,
self.bootloaders_dir.joinpath("grub", bl_mod_dir),
skip_existing=True,
)
def create_directories(self) -> None:
"""
        Create the required directories if they are missing; do nothing for existing ones. This creates the
        tree for all supported bootloaders, regardless of the capability to symlink/install/build them.
"""
if not self.bootloaders_dir.exists():
raise FileNotFoundError(
"Main bootloader directory not found! Please create it yourself!"
)
grub_dir = self.bootloaders_dir.joinpath("grub")
if not grub_dir.exists():
grub_dir.mkdir(mode=0o644)
# NOTE: move this to cobbler.utils?
# cobbler.utils.linkfile does a lot of things, it might be worth it to have a
# function just for symbolic links
def symlink(
target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False
) -> None:
"""Create a symlink LINK pointing to TARGET.
:param target: File/directory that the link will point to. The file/directory must exist.
:param link: Filename for the link.
:param skip_existing: Controls if existing links are skipped, defaults to False.
:raises FileNotFoundError: ``target`` is not an existing file.
:raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.
"""
if not target.exists():
raise FileNotFoundError(
f"{target} does not exist, can't create a symlink to it."
)
try:
link.symlink_to(target)
except FileExistsError:
if not skip_existing:
raise
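The `skip_existing` contract of the helper above can be exercised in isolation. The sketch below replicates the function body for a standalone demo under a temporary directory (all paths here are hypothetical):

```python
import pathlib
import tempfile


def symlink(
    target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False
) -> None:
    # Same body as the helper above, reproduced so the demo is self-contained.
    if not target.exists():
        raise FileNotFoundError(
            f"{target} does not exist, can't create a symlink to it."
        )
    try:
        link.symlink_to(target)
    except FileExistsError:
        if not skip_existing:
            raise


tmp = pathlib.Path(tempfile.mkdtemp())
target = tmp / "target.txt"
target.write_text("data")
link = tmp / "link.txt"

symlink(target, link)  # first call creates the link
symlink(target, link, skip_existing=True)  # existing link is tolerated
try:
    symlink(target, link)  # without the flag the error propagates
    raised = False
except FileExistsError:
    raised = True

result = link.read_text()
```

The second call silently succeeds only because `skip_existing` swallows the `FileExistsError`; note that an existing link pointing at the *wrong* target is also skipped.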
def mkimage(
image_format: str, image_filename: pathlib.Path, modules: typing.List[str]
) -> None:
"""Create a bootable image of GRUB using grub2-mkimage.
:param image_format: Format of the image that is being created. See man(1)
grub2-mkimage for a list of supported formats.
:param image_filename: Location of the image that is being created.
:param modules: List of GRUB modules to include into the image
:raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.
"""
if not image_filename.parent.exists():
image_filename.parent.mkdir(parents=True)
cmd = ["grub2-mkimage"]
cmd.extend(("--format", image_format))
cmd.extend(("--output", str(image_filename)))
cmd.append("--prefix=")
cmd.extend(modules)
# The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our
# own custom exception together with cobbler.utils.subprocess_* functions
subprocess.run(cmd, check=True)
def get_syslinux_version() -> int:
"""
This calls syslinux and asks for the version number.
:return: The major syslinux release number.
:raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.
"""
# Example output: "syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al"
cmd = ["syslinux", "-v"]
completed_process = subprocess.run(
cmd,
check=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
encoding=sys.getdefaultencoding(),
)
output = completed_process.stdout.split()
return int(float(output[1]))
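The parsing step of `get_syslinux_version()` can be checked against the sample output quoted in the comment, without a syslinux binary installed:

```python
# Second whitespace-separated token is the version; truncate to the major number.
sample = "syslinux 4.04  Copyright 1994-2011 H. Peter Anvin et al"
major = int(float(sample.split()[1]))
```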
def find_file(
glob_path: pathlib.Path, file_regex: typing.Pattern[str]
) -> typing.Optional[pathlib.Path]:
"""
Given a path glob and a file regex, return a full path of the file.
    :param glob_path: Glob of a path, e.g. Path('/var/*/rhn')
    :param file_regex: A regex for a filename in the path
    :return: The full file path or None if no file was found
"""
# Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773
parts = glob_path.parts
start_at = 1 if glob_path.is_absolute() else 0
bootloader_path_parts = pathlib.Path(*parts[start_at:])
results = sorted(pathlib.Path(glob_path.root).glob(str(bootloader_path_parts)))
# If no match, then report and bail out.
    if not results:
logging.getLogger().info('Unable to find the "%s" folder.', glob_path)
return None
# Now scan the folders with the regex
target_shim = None
for possible_folder in results:
for child in possible_folder.iterdir():
if file_regex.search(str(child)):
target_shim = child.resolve()
break
# If no match is found report and return
return target_shim
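The absolute-glob workaround in `find_file()` exists because `pathlib.Path.glob()` rejects absolute patterns, so the path is split into root plus relative parts and globbed from the root. A standalone sketch with hypothetical paths:

```python
import pathlib
import re
import tempfile

# Build a throwaway tree: <tmp>/efi/x86_64/shim.efi
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "efi" / "x86_64").mkdir(parents=True)
(tmp / "efi" / "x86_64" / "shim.efi").touch()

glob_path = tmp / "efi" / "*"  # analogous to Path('/boot/efi/*')
parts = glob_path.parts
start_at = 1 if glob_path.is_absolute() else 0
relative = pathlib.Path(*parts[start_at:])
# Glob the now-relative pattern from the filesystem root.
results = sorted(pathlib.Path(glob_path.root).glob(str(relative)))

file_regex = re.compile(r"shim\.efi$")
match = None
for folder in results:
    for child in folder.iterdir():
        if file_regex.search(str(child)):
            match = child.resolve()
            break
```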
# ===== cobbler_cobbler/cobbler/actions/replicate.py (repo: cobbler/cobbler, license: GPL-2.0) =====
"""
Replicate from a Cobbler master.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2007-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
# SPDX-FileCopyrightText: Scott Henson <shenson@redhat.com>
import fnmatch
import logging
import os
import xmlrpc.client
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from cobbler import utils
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
OBJ_TYPES = [
"distro",
"profile",
"system",
"repo",
"image",
]
class Replicate:
"""
This class contains the magic to replicate a Cobbler instance to another Cobbler instance.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor
:param api: The API which holds all information available in Cobbler.
"""
self.api = api
self.settings = api.settings()
self.remote: Optional[xmlrpc.client.ServerProxy] = None
self.uri: Optional[str] = None
self.logger = logging.getLogger()
self.master = ""
self.local_data: Dict[Any, Any] = {}
self.remote_data: Dict[Any, Any] = {}
self.remote_settings: Dict[str, Any] = {}
self.remote_names: Dict[Any, Any] = {}
self.remote_dict: Dict[str, Dict[str, Any]] = {}
self.must_include: Dict[str, Dict[str, Any]] = {
"distro": {},
"profile": {},
"system": {},
"image": {},
"repo": {},
}
self.port = ""
self.distro_patterns: List[str] = []
self.profile_patterns: List[str] = []
self.system_patterns: List[str] = []
self.repo_patterns: List[str] = []
self.image_patterns: List[str] = []
self.omit_data = False
self.prune = False
self.sync_all = False
self.use_ssl = False
self.local = None
def rsync_it(
self, from_path: str, to_path: str, object_type: Optional[str] = None
) -> None:
"""
Rsync from a source to a destination with the rsync options Cobbler was configured with.
:param from_path: The source to rsync from.
:param to_path: The destination to rsync to.
:param object_type: If set to "repo" this will take the repo rsync options instead of the global ones.
"""
from_path = f"{self.master}::{from_path}"
if object_type == "repo":
cmd = [
"rsync",
self.settings.replicate_repo_rsync_options,
from_path,
to_path,
]
else:
cmd = ["rsync", self.settings.replicate_rsync_options, from_path, to_path]
rsync_return_code = utils.subprocess_call(cmd, shell=False)
if rsync_return_code != 0:
self.logger.info("rsync failed")
# -------------------------------------------------------
def remove_objects_not_on_master(self, obj_type: str) -> None:
"""
Remove objects on this slave which are not on the master.
:param obj_type: The type of object which should be synchronized.
"""
local_objects = utils.lod_to_dod(self.local_data[obj_type], "uid")
remote_objects = utils.lod_to_dod(self.remote_data[obj_type], "uid")
for luid, ldata in local_objects.items():
if luid not in remote_objects:
try:
self.logger.info("removing %s %s", obj_type, ldata["name"])
self.api.remove_item(obj_type, ldata["name"], recursive=True)
except Exception:
utils.log_exc()
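The comparison above hinges on `utils.lod_to_dod` ("list of dicts" to "dict of dicts"), which re-keys the item dumps by a field so local and remote objects can be diffed by uid. A minimal sketch of that helper with made-up items (the real one lives in `cobbler.utils`):

```python
def lod_to_dod(lod, indexkey):
    # Re-key a list of dicts by one of their fields.
    return {item[indexkey]: item for item in lod}


local = [{"uid": "a1", "name": "web01"}, {"uid": "b2", "name": "db01"}]
remote = [{"uid": "a1", "name": "web01"}]

local_by_uid = lod_to_dod(local, "uid")
remote_by_uid = lod_to_dod(remote, "uid")

# Objects present locally but gone from the master are removal candidates.
stale = [d["name"] for uid, d in local_by_uid.items() if uid not in remote_by_uid]
```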
# -------------------------------------------------------
def add_objects_not_on_local(self, obj_type: str) -> None:
"""
Add objects locally which are not present on the slave but on the master.
        :param obj_type: The type of object which should be synchronized.
"""
local_objects = utils.lod_to_dod(self.local_data[obj_type], "uid")
remote_objects = utils.lod_sort_by_key(self.remote_data[obj_type], "depth")
for rdata in remote_objects:
# do not add the system if it is not on the transfer list
if not rdata["name"] in self.must_include[obj_type]:
continue
if not rdata["uid"] in local_objects:
creator = getattr(self.api, f"new_{obj_type}")
newobj = creator(utils.revert_strip_none(rdata))
try:
self.logger.info("adding %s %s", obj_type, rdata["name"])
if not self.api.add_item(obj_type, newobj):
self.logger.error(
"failed to add %s %s", obj_type, rdata["name"]
)
except Exception:
utils.log_exc()
# -------------------------------------------------------
def replace_objects_newer_on_remote(self, obj_type: str) -> None:
"""
        Replace local objects with their remote counterparts when the remote copy is newer.
:param obj_type: The type of object to synchronize.
"""
local_objects = utils.lod_to_dod(self.local_data[obj_type], "uid")
remote_objects = utils.lod_to_dod(self.remote_data[obj_type], "uid")
for ruid, rdata in remote_objects.items():
# do not add the system if it is not on the transfer list
if rdata["name"] not in self.must_include[obj_type]:
continue
if ruid in local_objects:
ldata = local_objects[ruid]
if ldata["mtime"] < rdata["mtime"]:
if ldata["name"] != rdata["name"]:
self.logger.info("removing %s %s", obj_type, ldata["name"])
self.api.remove_item(obj_type, ldata["name"], recursive=True)
creator = getattr(self.api, f"new_{obj_type}")
newobj = creator(utils.revert_strip_none(rdata))
try:
self.logger.info("updating %s %s", obj_type, rdata["name"])
if not self.api.add_item(obj_type, newobj):
self.logger.error(
"failed to update %s %s", obj_type, rdata["name"]
)
except Exception:
utils.log_exc()
# -------------------------------------------------------
def replicate_data(self) -> None:
"""
        Replicate data from the remote master to the local server, optionally pruning and rsyncing along the way.
"""
if self.remote is None:
self.logger.warning("Remote server unavailable. No data replicated.")
return
if self.local is None:
self.logger.warning("Local server unavailable. No data replicated.")
return
remote_settings = self.remote.get_settings()
if not isinstance(remote_settings, dict):
raise TypeError("Remote server passed unexpected data for settings")
self.remote_settings = remote_settings
self.logger.info("Querying Both Servers")
for what in OBJ_TYPES:
self.remote_data[what] = self.remote.get_items(what)
self.local_data[what] = self.local.get_items(what)
self.generate_include_map()
if self.prune:
self.logger.info("Removing Objects Not Stored On Master")
obj_types = OBJ_TYPES[:]
if len(self.system_patterns) == 0 and "system" in obj_types:
obj_types.remove("system")
for what in obj_types:
self.remove_objects_not_on_master(what)
else:
self.logger.info("*NOT* Removing Objects Not Stored On Master")
if not self.omit_data:
self.logger.info("Rsyncing distros")
for distro in self.must_include["distro"]:
if self.must_include["distro"][distro] == 1:
self.logger.info("Rsyncing distro %s", distro)
target = self.remote.get_distro(distro)
if not isinstance(target, dict):
raise TypeError(
"Remote server passed unexpected data for distro"
)
target_webdir = os.path.join(
self.remote_settings["webdir"], "distro_mirror"
)
tail = filesystem_helpers.path_tail(target_webdir, target["kernel"])
if tail != "":
try:
# path_tail(a,b) returns something that looks like
# an absolute path, but it's really the sub-path
# from a that is contained in b. That means we want
# the first element of the path
dest = os.path.join(
self.settings.webdir,
"distro_mirror",
tail.split("/")[1],
)
self.rsync_it(f"distro-{target['name']}", dest)
except Exception:
self.logger.error("Failed to rsync distro %s", distro)
continue
else:
self.logger.warning(
"Skipping distro %s, as it doesn't appear to live under distro_mirror",
distro,
)
self.logger.info("Rsyncing repos")
for repo in self.must_include["repo"]:
if self.must_include["repo"][repo] == 1:
self.rsync_it(
f"repo-{repo}",
os.path.join(self.settings.webdir, "repo_mirror", repo),
"repo",
)
self.logger.info("Rsyncing distro repo configs")
self.rsync_it(
"cobbler-distros/config/",
os.path.join(self.settings.webdir, "distro_mirror", "config"),
)
self.logger.info("Rsyncing automatic installation templates & snippets")
self.rsync_it("cobbler-templates", self.settings.autoinstall_templates_dir)
self.rsync_it("cobbler-snippets", self.settings.autoinstall_snippets_dir)
self.logger.info("Rsyncing triggers")
self.rsync_it("cobbler-triggers", "/var/lib/cobbler/triggers")
self.logger.info("Rsyncing scripts")
self.rsync_it("cobbler-scripts", "/var/lib/cobbler/scripts")
else:
self.logger.info("*NOT* Rsyncing Data")
self.logger.info("Adding Objects Not Stored On Local")
for what in OBJ_TYPES:
self.add_objects_not_on_local(what)
self.logger.info("Updating Objects Newer On Remote")
for what in OBJ_TYPES:
self.replace_objects_newer_on_remote(what)
def link_distros(self) -> None:
"""
Link a distro from its location into the web directory to make it available for usage.
"""
for distro in self.api.distros():
self.logger.debug("Linking Distro %s", distro.name)
distro.link_distro()
def generate_include_map(self) -> None:
"""
        Fill ``self.must_include`` with the names of all objects that have to be replicated.
"""
# This is the method that fills up "self.must_include"
# Load all remote objects and add them directly if "self.sync_all" is "True"
for object_type in OBJ_TYPES:
self.remote_names[object_type] = list(
utils.lod_to_dod(self.remote_data[object_type], "name").keys()
)
self.remote_dict[object_type] = utils.lod_to_dod(
self.remote_data[object_type], "name"
)
if self.sync_all:
for names in self.remote_dict[object_type]:
self.must_include[object_type][names] = 1
self.logger.debug("remote names struct is %s", self.remote_names)
if not self.sync_all:
# include all profiles that are matched by a pattern
for obj_type in OBJ_TYPES:
patvar = getattr(self, f"{obj_type}_patterns")
self.logger.debug("* Finding Explicit %s Matches", obj_type)
for pat in patvar:
for remote in self.remote_names[obj_type]:
self.logger.debug("?: seeing if %s looks like %s", remote, pat)
if fnmatch.fnmatch(remote, pat):
self.logger.debug(
"Adding %s for pattern match %s.", remote, pat
)
self.must_include[obj_type][remote] = 1
# include all profiles that systems require whether they are explicitly included or not
self.logger.debug("* Adding Profiles Required By Systems")
for sys in self.must_include["system"]:
pro = self.remote_dict["system"][sys].get("profile", "")
self.logger.debug("?: system %s requires profile %s.", sys, pro)
if pro != "":
self.logger.debug("Adding profile %s for system %s.", pro, sys)
self.must_include["profile"][pro] = 1
            # include all profiles that subprofiles require, whether they are explicitly included or not;
            # very deep nesting is possible
            self.logger.debug("* Adding Profiles Required By SubProfiles")
while True:
loop_exit = True
for pro in self.must_include["profile"]:
parent = self.remote_dict["profile"][pro].get("parent", "")
if parent != "":
if parent not in self.must_include["profile"]:
self.logger.debug(
"Adding parent profile %s for profile %s.", parent, pro
)
self.must_include["profile"][parent] = 1
loop_exit = False
if loop_exit:
break
# require all distros that any profiles in the generated list requires whether they are explicitly included
# or not
self.logger.debug("* Adding Distros Required By Profiles")
for profile_for_distro in self.must_include["profile"]:
distro = self.remote_dict["profile"][profile_for_distro].get(
"distro", ""
)
if not distro == "<<inherit>>" and not distro == "~":
self.logger.debug(
"Adding distro %s for profile %s.", distro, profile_for_distro
)
self.must_include["distro"][distro] = 1
# require any repos that any profiles in the generated list requires whether they are explicitly included
# or not
self.logger.debug("* Adding Repos Required By Profiles")
for profile_for_repo in self.must_include["profile"]:
repos = self.remote_dict["profile"][profile_for_repo].get("repos", [])
if repos != "<<inherit>>":
for repo in repos:
self.logger.debug(
"Adding repo %s for profile %s.", repo, profile_for_repo
)
self.must_include["repo"][repo] = 1
# include all images that systems require whether they are explicitly included or not
self.logger.debug("* Adding Images Required By Systems")
for sys in self.must_include["system"]:
img = self.remote_dict["system"][sys].get("image", "")
self.logger.debug("?: system %s requires image %s.", sys, img)
if img != "":
self.logger.debug("Adding image %s for system %s.", img, sys)
self.must_include["image"][img] = 1
# -------------------------------------------------------
def run(
self,
cobbler_master: Optional[str] = None,
port: str = "80",
distro_patterns: Optional[str] = None,
profile_patterns: Optional[str] = None,
system_patterns: Optional[str] = None,
repo_patterns: Optional[str] = None,
image_patterns: Optional[str] = None,
prune: bool = False,
omit_data: bool = False,
sync_all: bool = False,
use_ssl: bool = False,
) -> None:
"""
Get remote profiles and distros and sync them locally
:param cobbler_master: The remote url of the master server.
:param port: The remote port of the master server.
:param distro_patterns: The pattern of distros to sync.
:param profile_patterns: The pattern of profiles to sync.
:param system_patterns: The pattern of systems to sync.
:param repo_patterns: The pattern of repositories to sync.
:param image_patterns: The pattern of images to sync.
        :param prune: If local objects which are not present on the master should be removed before copying.
:param omit_data: If the data behind images etc should be omitted or not.
:param sync_all: If everything should be synced (then the patterns are useless) or not.
:param use_ssl: If HTTPS or HTTP should be used.
"""
self.port = str(port)
if isinstance(distro_patterns, str):
self.distro_patterns = distro_patterns.split()
if isinstance(profile_patterns, str):
self.profile_patterns = profile_patterns.split()
if isinstance(system_patterns, str):
self.system_patterns = system_patterns.split()
if isinstance(repo_patterns, str):
self.repo_patterns = repo_patterns.split()
if isinstance(image_patterns, str):
self.image_patterns = image_patterns.split()
self.omit_data = omit_data
self.prune = prune
self.sync_all = sync_all
self.use_ssl = use_ssl
if self.use_ssl:
protocol = "https"
else:
protocol = "http"
if cobbler_master is not None:
self.master = cobbler_master
elif len(self.settings.cobbler_master) > 0:
self.master = self.settings.cobbler_master
else:
utils.die("No Cobbler master specified, try --master.")
self.uri = f"{protocol}://{self.master}:{self.port}/cobbler_api"
self.logger.info("cobbler_master = %s", cobbler_master)
self.logger.info("port = %s", self.port)
self.logger.info("distro_patterns = %s", self.distro_patterns)
self.logger.info("profile_patterns = %s", self.profile_patterns)
self.logger.info("system_patterns = %s", self.system_patterns)
self.logger.info("repo_patterns = %s", self.repo_patterns)
self.logger.info("image_patterns = %s", self.image_patterns)
self.logger.info("omit_data = %s", self.omit_data)
self.logger.info("sync_all = %s", self.sync_all)
self.logger.info("use_ssl = %s", self.use_ssl)
self.logger.info("XMLRPC endpoint: %s", self.uri)
self.logger.debug("test ALPHA")
self.remote = xmlrpc.client.Server(self.uri)
self.logger.debug("test BETA")
self.remote.ping()
self.local = xmlrpc.client.Server(
f"http://127.0.0.1:{self.settings.http_port}/cobbler_api"
)
self.local.ping()
self.replicate_data()
self.link_distros()
self.logger.info("Syncing")
self.api.sync()
self.logger.info("Done")
# ===== cobbler_cobbler/cobbler/actions/__init__.py (repo: cobbler/cobbler, license: GPL-2.0) =====
"""
The action module contains one Python module for each action which Cobbler offers. Actions should
not depend on one another or on other parts of Cobbler; an action should request the exact
data it requires and nothing more.
"""
# ===== cobbler_cobbler/cobbler/actions/log.py (repo: cobbler/cobbler, license: GPL-2.0) =====
"""
Cobbler Trigger Module that managed the logs associated with a Cobbler system.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Bill Peck <bpeck@redhat.com>
import glob
import logging
import os
import os.path
import pathlib
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.items.system import System
class LogTool:
"""
    Helpers for dealing with System logs, anamon, etc.
"""
def __init__(self, system: "System", api: "CobblerAPI"):
"""
Log library constructor requires a Cobbler system object.
"""
self.system = system
self.api = api
self.settings = api.settings()
self.logger = logging.getLogger()
def clear(self) -> None:
"""
Clears the system logs
"""
anamon_dir = pathlib.Path("/var/log/cobbler/anamon/").joinpath(self.system.name)
if anamon_dir.is_dir():
logs = list(
filter(os.path.isfile, glob.glob(str(anamon_dir.joinpath("*"))))
)
else:
logs = []
            self.logger.info(
                "No log-files found to delete for system: %s", self.system.name
            )
for log in logs:
try:
with open(log, "w", encoding="UTF-8") as log_fd:
log_fd.truncate()
except IOError as error:
self.logger.info("Failed to Truncate '%s':%s ", log, error)
# ===== cobbler_cobbler/cobbler/actions/acl.py (repo: cobbler/cobbler, license: GPL-2.0) =====
"""
Configures acls for various users/groups so they can access the Cobbler command
line as non-root. Now that the CLI is largely remoted (XMLRPC), this is mostly
useful for avoiding a login (it grants access to the shared-secret file), but it
also allows hand-editing of various cobbler_collections files and other useful things.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
from typing import TYPE_CHECKING, Optional
from cobbler import utils
from cobbler.cexceptions import CX
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
class AclConfig:
"""
    Configures filesystem ACLs so that given users/groups can manage Cobbler without being root.
"""
def __init__(self, api: "CobblerAPI") -> None:
"""
Constructor
:param api: The API which holds all information about Cobbler.
"""
self.api = api
self.settings = api.settings()
def run(
self,
adduser: Optional[str] = None,
addgroup: Optional[str] = None,
removeuser: Optional[str] = None,
removegroup: Optional[str] = None,
) -> None:
"""
        Automate setfacl commands. At least one of the four options must be specified.
:param adduser: Add a user to be able to manage Cobbler.
:param addgroup: Add a group to be able to manage Cobbler.
        :param removeuser: Remove a user's ability to manage Cobbler.
        :param removegroup: Remove a group's ability to manage Cobbler.
:raises CX: Raised in case not enough arguments are specified.
"""
args_ok = False
if adduser:
args_ok = True
self.modacl(True, True, adduser)
if addgroup:
args_ok = True
self.modacl(True, False, addgroup)
if removeuser:
args_ok = True
self.modacl(False, True, removeuser)
if removegroup:
args_ok = True
self.modacl(False, False, removegroup)
if not args_ok:
raise CX("no arguments specified, nothing to do")
def modacl(self, isadd: bool, isuser: bool, who: str) -> None:
"""
Modify the acls for Cobbler on the filesystem.
:param isadd: If true then the ``who`` will be added. If false then ``who`` will be removed.
:param isuser: If true then the ``who`` may be a user. If false then ``who`` may be a group.
:param who: The user or group to be added or removed.
"""
snipdir = self.settings.autoinstall_snippets_dir
tftpboot = self.settings.tftpboot_location
process_dirs = {
"/var/log/cobbler": "rwx",
"/var/log/cobbler/tasks": "rwx",
"/var/lib/cobbler": "rwx",
"/etc/cobbler": "rwx",
tftpboot: "rwx",
"/var/lib/cobbler/triggers": "rwx",
}
if not snipdir.startswith("/var/lib/cobbler/"):
process_dirs[snipdir] = "r"
for directory, how in process_dirs.items():
cmd = [
"setfacl",
"-d",
"-R",
"-m" if isadd else "-x",
f"u:{who}" if isuser else f"g:{who}",
directory,
]
if isadd:
cmd[4] = f"{cmd[4]}:{how}"
# We must pass in a copy of list because in case the call is async we
# would modify the call that maybe has not been done. We don't do this
# yet but let's be sure. Also, the tests would break if we don't pass a copy.
setfacl_reset_return_code = utils.subprocess_call(cmd.copy(), shell=False)
if setfacl_reset_return_code != 0:
utils.die(f'"setfacl" command failed for "{directory}"')
cmd.pop(1)
setfacl_return_code = utils.subprocess_call(cmd.copy(), shell=False)
if setfacl_return_code != 0:
utils.die(f'"setfacl" command failed for "{directory}"')
# ===== cobbler_cobbler/cobbler/actions/buildiso/netboot.py (repo: cobbler/cobbler, license: GPL-2.0) =====
"""
This module contains the specific code to generate a network bootable ISO.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
import pathlib
import re
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from cobbler import utils
from cobbler.actions import buildiso
from cobbler.actions.buildiso import BootFilesCopyset, LoaderCfgsParts
from cobbler.enums import Archs
from cobbler.utils import filesystem_helpers, input_converters
if TYPE_CHECKING:
from cobbler.items.distro import Distro
from cobbler.items.profile import Profile
from cobbler.items.system import System
class AppendLineBuilder:
"""
    This class is meant to be instantiated for a single append line. Afterwards the object should be discarded.
"""
def __init__(self, distro_name: str, data: Dict[str, Any]):
self.append_line = ""
self.data = data
self.distro_name = distro_name
self.dist: Optional["Distro"] = None
self.system_interface: Optional[str] = None
self.system_ip = None
self.system_netmask = None
self.system_gw = None
self.system_dns: Optional[Union[str, List[str]]] = None
def _system_int_append_line(self) -> None:
"""
This generates the interface configuration for the system to boot for the append line.
"""
if self.dist is None:
return
if self.system_interface is not None:
intmac = "mac_address_" + self.system_interface
if self.dist.breed == "suse":
if self.data.get(intmac, "") != "":
self.append_line += f" netdevice={self.data['mac_address_' + self.system_interface].lower()}"
else:
self.append_line += f" netdevice={self.system_interface}"
elif self.dist.breed == "redhat":
if self.data.get(intmac, "") != "":
                    self.append_line += f" ksdevice={self.data[intmac]}"
else:
self.append_line += f" ksdevice={self.system_interface}"
elif self.dist.breed in ["ubuntu", "debian"]:
self.append_line += f" netcfg/choose_interface={self.system_interface}"
def _system_ip_append_line(self) -> None:
"""
This generates the IP configuration for the system to boot for the append line.
"""
if self.dist is None:
return
if self.system_ip is not None:
if self.dist.breed == "suse":
self.append_line += f" hostip={self.system_ip}"
elif self.dist.breed == "redhat":
self.append_line += f" ip={self.system_ip}"
elif self.dist.breed in ["ubuntu", "debian"]:
self.append_line += f" netcfg/get_ipaddress={self.system_ip}"
def _system_mask_append_line(self) -> None:
"""
This generates the netmask configuration for the system to boot for the append line.
"""
if self.dist is None:
return
if self.system_netmask is not None:
if self.dist.breed in ["suse", "redhat"]:
self.append_line += f" netmask={self.system_netmask}"
elif self.dist.breed in ["ubuntu", "debian"]:
self.append_line += f" netcfg/get_netmask={self.system_netmask}"
def _system_gw_append_line(self) -> None:
"""
This generates the gateway configuration for the system to boot for the append line.
"""
if self.dist is None:
return
if self.system_gw is not None:
if self.dist.breed in ["suse", "redhat"]:
self.append_line += f" gateway={self.system_gw}"
elif self.dist.breed in ["ubuntu", "debian"]:
self.append_line += f" netcfg/get_gateway={self.system_gw}"
def _system_dns_append_line(self, exclude_dns: bool) -> None:
"""
This generates the DNS configuration for the system to boot for the append line.
:param exclude_dns: If this flag is set to True, the DNS configuration is skipped.
"""
if self.dist is None:
return
if not exclude_dns and self.system_dns is not None:
if self.dist.breed == "suse":
nameserver_key = "nameserver"
elif self.dist.breed == "redhat":
nameserver_key = "dns"
elif self.dist.breed in ["ubuntu", "debian"]:
nameserver_key = "netcfg/get_nameservers"
else:
return
if isinstance(self.system_dns, list):
joined_nameservers = ",".join(self.system_dns)
if joined_nameservers != "":
self.append_line += f" {nameserver_key}={joined_nameservers}"
else:
self.append_line += f" {nameserver_key}={self.system_dns}"
def _generate_static_ip_boot_interface(self) -> None:
"""
The interface to use when the system boots.
"""
if self.dist is None:
return
if self.dist.breed == "redhat":
if self.data["kernel_options"].get("ksdevice", "") != "":
self.system_interface = self.data["kernel_options"]["ksdevice"]
if self.system_interface == "bootif":
self.system_interface = None
del self.data["kernel_options"]["ksdevice"]
elif self.dist.breed == "suse":
if self.data["kernel_options"].get("netdevice", "") != "":
self.system_interface = self.data["kernel_options"]["netdevice"]
del self.data["kernel_options"]["netdevice"]
elif self.dist.breed in ["debian", "ubuntu"]:
if self.data["kernel_options"].get("netcfg/choose_interface", "") != "":
self.system_interface = self.data["kernel_options"][
"netcfg/choose_interface"
]
del self.data["kernel_options"]["netcfg/choose_interface"]
def _generate_static_ip_boot_ip(self) -> None:
"""
Generate the IP which is used during the installation process. This respects overrides.
"""
if self.dist is None:
return
if self.dist.breed == "redhat":
if self.data["kernel_options"].get("ip", "") != "":
self.system_ip = self.data["kernel_options"]["ip"]
del self.data["kernel_options"]["ip"]
elif self.dist.breed == "suse":
if self.data["kernel_options"].get("hostip", "") != "":
self.system_ip = self.data["kernel_options"]["hostip"]
del self.data["kernel_options"]["hostip"]
elif self.dist.breed in ["debian", "ubuntu"]:
if self.data["kernel_options"].get("netcfg/get_ipaddress", "") != "":
self.system_ip = self.data["kernel_options"]["netcfg/get_ipaddress"]
del self.data["kernel_options"]["netcfg/get_ipaddress"]
def _generate_static_ip_boot_mask(self) -> None:
"""
Generate the Netmask which is used during the installation process. This respects overrides.
"""
if self.dist is None:
return
if self.dist.breed in ["suse", "redhat"]:
if self.data["kernel_options"].get("netmask", "") != "":
self.system_netmask = self.data["kernel_options"]["netmask"]
del self.data["kernel_options"]["netmask"]
elif self.dist.breed in ["debian", "ubuntu"]:
if self.data["kernel_options"].get("netcfg/get_netmask", "") != "":
self.system_netmask = self.data["kernel_options"]["netcfg/get_netmask"]
del self.data["kernel_options"]["netcfg/get_netmask"]
def _generate_static_ip_boot_gateway(self) -> None:
"""
Generate the Gateway which is used during the installation process. This respects overrides.
"""
if self.dist is None:
return
if self.dist.breed in ["suse", "redhat"]:
if self.data["kernel_options"].get("gateway", "") != "":
self.system_gw = self.data["kernel_options"]["gateway"]
del self.data["kernel_options"]["gateway"]
elif self.dist.breed in ["debian", "ubuntu"]:
if self.data["kernel_options"].get("netcfg/get_gateway", "") != "":
self.system_gw = self.data["kernel_options"]["netcfg/get_gateway"]
del self.data["kernel_options"]["netcfg/get_gateway"]
def _generate_static_ip_boot_dns(self) -> None:
"""
Generates the static Boot DNS Server which is used for resolving Domains.
"""
if self.dist is None:
return
if self.dist.breed == "redhat":
if self.data["kernel_options"].get("dns", "") != "":
self.system_dns = self.data["kernel_options"]["dns"]
del self.data["kernel_options"]["dns"]
elif self.dist.breed == "suse":
if self.data["kernel_options"].get("nameserver", "") != "":
self.system_dns = self.data["kernel_options"]["nameserver"]
del self.data["kernel_options"]["nameserver"]
elif self.dist.breed in ["debian", "ubuntu"]:
if self.data["kernel_options"].get("netcfg/get_nameservers", "") != "":
self.system_dns = self.data["kernel_options"]["netcfg/get_nameservers"]
del self.data["kernel_options"]["netcfg/get_nameservers"]
def _generate_static_ip_boot_options(self) -> None:
"""
        Try to add static IP boot options to avoid DHCP (interface/ip/netmask/gw/dns).
        Check for overrides first and clear them from kernel_options.
"""
self._generate_static_ip_boot_interface()
self._generate_static_ip_boot_ip()
self._generate_static_ip_boot_mask()
self._generate_static_ip_boot_gateway()
self._generate_static_ip_boot_dns()
def _generate_append_redhat(self) -> None:
"""
Generate additional content for the append line in case that dist is a RedHat based one.
"""
if self.data.get("proxy", "") != "":
self.append_line += (
f" proxy={self.data['proxy']} http_proxy={self.data['proxy']}"
)
if self.dist and self.dist.os_version in [
"rhel4",
"rhel5",
"rhel6",
"fedora16",
]:
self.append_line += f" ks={self.data['autoinstall']}"
if self.data["autoinstall_meta"].get("tree"):
self.append_line += f" repo={self.data['autoinstall_meta']['tree']}"
else:
self.append_line += f" inst.ks={self.data['autoinstall']}"
if self.data["autoinstall_meta"].get("tree"):
self.append_line += (
f" inst.repo={self.data['autoinstall_meta']['tree']}"
)
def _generate_append_debian(self, system: "System") -> None:
"""
Generate additional content for the append line in case that dist is Ubuntu or Debian.
:param system: The system which the append line should be generated for.
"""
if self.dist is None:
return
self.append_line += f" auto-install/enable=true url={self.data['autoinstall']} netcfg/disable_autoconfig=true"
if self.data.get("proxy", "") != "":
self.append_line += f" mirror/http/proxy={self.data['proxy']}"
# hostname is required as a parameter, the one in the preseed is not respected
my_domain = "local.lan"
if system.hostname != "":
# if this is a FQDN, grab the first bit
my_hostname = system.hostname.split(".")[0]
_domain = system.hostname.split(".")[1:]
if _domain:
my_domain = ".".join(_domain)
else:
my_hostname = system.name.split(".")[0]
_domain = system.name.split(".")[1:]
if _domain:
my_domain = ".".join(_domain)
        # At least for Debian deployments configured for DHCP networking these values are not used, but
        # specifying them here avoids questions
self.append_line += f" hostname={my_hostname} domain={my_domain}"
# A similar issue exists with suite name, as installer requires the existence of "stable" in the dists
# directory
self.append_line += f" suite={self.dist.os_version}"
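The hostname/domain split above can be sketched as a standalone helper (the name `split_fqdn` is illustrative; `local.lan` is the same fallback domain used above):

```python
def split_fqdn(name: str, default_domain: str = "local.lan"):
    """Split a possibly fully-qualified name into (hostname, domain)."""
    parts = name.split(".")
    hostname = parts[0]
    # Everything after the first label is treated as the domain, if present.
    domain = ".".join(parts[1:]) if parts[1:] else default_domain
    return hostname, domain

# split_fqdn("web01.example.com") -> ("web01", "example.com")
# split_fqdn("web01") -> ("web01", "local.lan")
```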
def _generate_append_suse(self, scheme: str = "http") -> None:
"""
Special adjustments for generating the append line for suse.
:param scheme: This can be either ``http`` or ``https``.
        :return: None. The append line is updated in-place; if the distribution is not SUSE-based, nothing is changed.
"""
if self.dist is None:
return
if self.data.get("proxy", "") != "":
self.append_line += f" proxy={self.data['proxy']}"
if self.data["kernel_options"].get("install", "") != "":
self.append_line += f" install={self.data['kernel_options']['install']}"
del self.data["kernel_options"]["install"]
else:
self.append_line += (
f" install={scheme}://{self.data['server']}:{self.data['http_port']}/cblr/"
f"links/{self.dist.name}"
)
if self.data["kernel_options"].get("autoyast", "") != "":
self.append_line += f" autoyast={self.data['kernel_options']['autoyast']}"
del self.data["kernel_options"]["autoyast"]
else:
self.append_line += f" autoyast={self.data['autoinstall']}"
def _adjust_interface_config(self) -> None:
"""
        If no kernel_options overrides are present, find the management interface. Do nothing when zero or multiple
        management interfaces are found.
"""
if self.system_interface is None:
mgmt_ints: List[str] = []
mgmt_ints_multi: List[str] = []
slave_ints: List[str] = []
for iname, idata in self.data["interfaces"].items():
if idata["management"] and idata["interface_type"] in [
"bond",
"bridge",
]:
# bonded/bridged management interface
mgmt_ints_multi.append(iname)
if idata["management"] and idata["interface_type"] not in [
"bond",
"bridge",
"bond_slave",
"bridge_slave",
"bonded_bridge_slave",
]:
# single management interface
mgmt_ints.append(iname)
if len(mgmt_ints_multi) == 1 and len(mgmt_ints) == 0:
                # Bonded/bridged management interface: find a slave interface. If eth0 is a slave, use that (it's
                # what people expect).
for iname, idata in self.data["interfaces"].items():
if (
idata["interface_type"]
in ["bond_slave", "bridge_slave", "bonded_bridge_slave"]
and idata["interface_master"] == mgmt_ints_multi[0]
):
slave_ints.append(iname)
if "eth0" in slave_ints:
self.system_interface = "eth0"
else:
self.system_interface = slave_ints[0]
# Set self.system_ip from the bonded/bridged interface here
self.system_ip = self.data[
"ip_address_"
+ self.data["interface_master_" + self.system_interface]
]
self.system_netmask = self.data[
"netmask_" + self.data["interface_master_" + self.system_interface]
]
if len(mgmt_ints) == 1 and len(mgmt_ints_multi) == 0:
# Single management interface
self.system_interface = mgmt_ints[0]
def _get_tcp_ip_config(self) -> None:
"""
        Look up TCP/IP configuration data. Values that are not already set are added from the previously blended data.
"""
if self.system_ip is None and self.system_interface is not None:
intip = "ip_address_" + self.system_interface
if self.data.get(intip, "") != "":
self.system_ip = self.data["ip_address_" + self.system_interface]
if self.system_netmask is None and self.system_interface is not None:
intmask = "netmask_" + self.system_interface
if self.data.get(intmask, "") != "":
self.system_netmask = self.data["netmask_" + self.system_interface]
if self.system_gw is None:
if self.data.get("gateway", "") != "":
self.system_gw = self.data["gateway"]
if self.system_dns is None:
            if self.data.get("name_servers"):
self.system_dns = self.data["name_servers"]
def generate_system(
self, dist: "Distro", system: "System", exclude_dns: bool, scheme: str = "http"
) -> str:
"""
Generate the append-line for a net-booting system.
:param dist: The distribution associated with the system.
:param system: The system itself
:param exclude_dns: Whether to include the DNS config or not.
:param scheme: The scheme that is used to read the autoyast file from the server
"""
self.dist = dist
self.append_line = f" APPEND initrd=/{self.distro_name}.img"
if self.dist.breed == "suse":
self._generate_append_suse(scheme=scheme)
elif self.dist.breed == "redhat":
self._generate_append_redhat()
elif dist.breed in ["ubuntu", "debian"]:
self._generate_append_debian(system)
self._generate_static_ip_boot_options()
self._adjust_interface_config()
self._get_tcp_ip_config()
# Add information to the append_line
self._system_int_append_line()
self._system_ip_append_line()
self._system_mask_append_line()
self._system_gw_append_line()
self._system_dns_append_line(exclude_dns)
# Add remaining kernel_options to append_line
self.append_line += buildiso.add_remaining_kopts(self.data["kernel_options"])
return self.append_line
def generate_profile(
self, distro_breed: str, os_version: str, protocol: str = "http"
) -> str:
"""
Generate the append line for the kernel for a network installation.
:param distro_breed: The name of the distribution breed.
:param os_version: The OS version of the distribution.
:param protocol: The scheme that is used to read the autoyast file from the server
:return: The generated append line.
"""
self.append_line = f" append initrd=/{self.distro_name}.img"
if distro_breed == "suse":
if self.data.get("proxy", "") != "":
self.append_line += f" proxy={self.data['proxy']}"
if self.data["kernel_options"].get("install", "") != "":
install_options: Union[str, List[str]] = self.data["kernel_options"][
"install"
]
if isinstance(install_options, list):
install_options = install_options[0]
self.append_line += f" install={install_options}"
del self.data["kernel_options"]["install"]
else:
self.append_line += (
f" install={protocol}://{self.data['server']}:{self.data['http_port']}/cblr/"
f"links/{self.distro_name}"
)
if self.data["kernel_options"].get("autoyast", "") != "":
self.append_line += (
f" autoyast={self.data['kernel_options']['autoyast']}"
)
del self.data["kernel_options"]["autoyast"]
else:
self.append_line += f" autoyast={self.data['autoinstall']}"
elif distro_breed == "redhat":
if self.data.get("proxy", "") != "":
self.append_line += (
f" proxy={self.data['proxy']} http_proxy={self.data['proxy']}"
)
if os_version in ["rhel4", "rhel5", "rhel6", "fedora16"]:
self.append_line += f" ks={self.data['autoinstall']}"
if self.data["autoinstall_meta"].get("tree"):
self.append_line += f" repo={self.data['autoinstall_meta']['tree']}"
else:
self.append_line += f" inst.ks={self.data['autoinstall']}"
if self.data["autoinstall_meta"].get("tree"):
self.append_line += (
f" inst.repo={self.data['autoinstall_meta']['tree']}"
)
elif distro_breed in ["ubuntu", "debian"]:
self.append_line += (
f" auto-install/enable=true url={self.data['autoinstall']}"
)
if self.data.get("proxy", "") != "":
self.append_line += f" mirror/http/proxy={self.data['proxy']}"
self.append_line += buildiso.add_remaining_kopts(self.data["kernel_options"])
return self.append_line
class NetbootBuildiso(buildiso.BuildIso):
"""
This class contains all functionality related to building network installation images.
"""
def filter_systems(self, selected_items: Optional[List[str]] = None) -> List[Any]:
"""
Return a list of valid system objects selected from all systems by name, or everything if ``selected_items`` is
empty.
:param selected_items: A list of names to include in the returned list.
:return: A list of valid systems. If an error occurred this is logged and an empty list is returned.
"""
if selected_items is None:
selected_items = []
found_systems = self.filter_items(self.api.systems(), selected_items)
# Now filter all systems out that are image based as we don't know about their kernel and initrds
return_systems: List["System"] = []
for system in found_systems:
# All systems not underneath a profile should be skipped
parent_obj = system.get_conceptual_parent()
if (
parent_obj is not None
and parent_obj.TYPE_NAME == "profile" # type: ignore[reportUnnecessaryComparison]
):
return_systems.append(system)
# Now finally return
return return_systems
def make_shorter(self, distname: str) -> str:
"""
        Return a short distro identifier. Internally this is an incrementing counter that is mapped to the real distro
        name.
:param distname: The distro name to return an identifier for.
:return: A short distro identifier
"""
if distname in self.distmap:
return self.distmap[distname]
self.distctr += 1
self.distmap[distname] = str(self.distctr)
return str(self.distctr)
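A minimal, self-contained sketch of this counter mapping (the class name is illustrative): the same name always maps to the same short identifier, and new names get the next counter value.

```python
class DistroShortener:
    """Map long distro names to short, stable numeric identifiers."""

    def __init__(self) -> None:
        self.distmap = {}  # distro name -> short identifier
        self.distctr = 0   # internal counter

    def make_shorter(self, distname: str) -> str:
        if distname in self.distmap:
            return self.distmap[distname]
        self.distctr += 1
        self.distmap[distname] = str(self.distctr)
        return str(self.distctr)
```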
def _generate_boot_loader_configs(
self, profile_names: List[str], system_names: List[str], exclude_dns: bool
) -> LoaderCfgsParts:
"""Generate boot loader configuration.
The configuration is placed as parts into a list. The elements expect to
be joined by newlines for writing.
:param profile_names: Profile filter, can be an empty list for "all profiles".
:param system_names: System filter, can be an empty list for "all systems".
:param exclude_dns: Used for system kernel cmdline.
"""
loader_config_parts = LoaderCfgsParts([self.iso_template], [], [])
loader_config_parts.isolinux.append("MENU SEPARATOR")
self._generate_profiles_loader_configs(profile_names, loader_config_parts)
self._generate_systems_loader_configs(
system_names, exclude_dns, loader_config_parts
)
return loader_config_parts
def _generate_profiles_loader_configs(
self, profiles: List[str], loader_cfg_parts: LoaderCfgsParts
) -> None:
"""Generate isolinux configuration for profiles.
        The passed-in loader_cfg_parts object is changed in-place.
        :param profiles: Profile filter, can be empty for "all profiles".
        :param loader_cfg_parts: Output parameter for the boot loader configuration and bootfiles copysets.
"""
for profile in self.filter_profiles(profiles):
isolinux, grub, to_copy = self._generate_profile_config(profile)
loader_cfg_parts.isolinux.append(isolinux)
loader_cfg_parts.grub.append(grub)
loader_cfg_parts.bootfiles_copysets.append(to_copy)
def _generate_profile_config(
self, profile: "Profile"
) -> Tuple[str, str, BootFilesCopyset]:
"""Generate isolinux configuration for a single profile.
:param profile: Profile object to generate the configuration for.
"""
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore[reportGeneralTypeIssues]
if distro is None:
raise ValueError("Distro of a Profile must not be None!")
distroname = self.make_shorter(distro.name)
data = utils.blender(self.api, False, profile)
# SUSE uses 'textmode' instead of 'text'
utils.kopts_overwrite(
data["kernel_options"], self.api.settings().server, distro.breed
)
if not re.match(r"[a-z]+://.*", data["autoinstall"]):
autoinstall_scheme = self.api.settings().autoinstall_scheme
data["autoinstall"] = (
f"{autoinstall_scheme}://{data['server']}:{data['http_port']}/cblr/svc/op/autoinstall/"
f"profile/{profile.name}"
)
append_line = AppendLineBuilder(
distro_name=distroname, data=data
).generate_profile(distro.breed, distro.os_version)
kernel_path = f"/{distroname}.krn"
initrd_path = f"/{distroname}.img"
isolinux_cfg = self._render_isolinux_entry(
append_line, menu_name=distro.name, kernel_path=kernel_path
)
grub_cfg = self._render_grub_entry(
append_line,
menu_name=distro.name,
kernel_path=kernel_path,
initrd_path=initrd_path,
)
return (
isolinux_cfg,
grub_cfg,
BootFilesCopyset(distro.kernel, distro.initrd, distroname),
)
def _generate_systems_loader_configs(
self,
system_names: List[str],
exclude_dns: bool,
loader_cfg_parts: LoaderCfgsParts,
) -> None:
"""Generate isolinux configuration for systems.
        The passed-in loader_cfg_parts object is changed in-place.
        :param system_names: System filter, can be empty for "all systems".
        :param exclude_dns: Control if DNS configuration is part of the kernel cmdline.
        :param loader_cfg_parts: Output parameter for the boot loader configuration and bootfiles copysets.
"""
for system in self.filter_systems(system_names):
isolinux, grub, to_copy = self._generate_system_config(
system, exclude_dns=exclude_dns
)
loader_cfg_parts.isolinux.append(isolinux)
loader_cfg_parts.grub.append(grub)
loader_cfg_parts.bootfiles_copysets.append(to_copy)
def _generate_system_config(
self, system: "System", exclude_dns: bool
) -> Tuple[str, str, BootFilesCopyset]:
"""Generate isolinux configuration for a single system.
:param system: System object to generate the configuration for.
        :param exclude_dns: Control if DNS configuration is part of the kernel cmdline.
"""
profile = system.get_conceptual_parent()
# FIXME: pass distro, it's known from CLI
distro: Optional["Distro"] = profile.get_conceptual_parent() # type: ignore
if distro is None:
raise ValueError("Distro of Profile may never be None!")
distroname = self.make_shorter(distro.name) # type: ignore
data = utils.blender(self.api, False, system)
autoinstall_scheme = self.api.settings().autoinstall_scheme
if not re.match(r"[a-z]+://.*", data["autoinstall"]):
data["autoinstall"] = (
f"{autoinstall_scheme}://{data['server']}:{data['http_port']}/cblr/svc/op/autoinstall/"
f"system/{system.name}"
)
append_line = AppendLineBuilder(
distro_name=distroname, data=data
).generate_system(
distro, system, exclude_dns # type: ignore
)
kernel_path = f"/{distroname}.krn"
initrd_path = f"/{distroname}.img"
isolinux_cfg = self._render_isolinux_entry(
append_line, menu_name=system.name, kernel_path=kernel_path
)
grub_cfg = self._render_grub_entry(
append_line,
menu_name=distro.name, # type: ignore
kernel_path=kernel_path,
initrd_path=initrd_path,
)
return (
isolinux_cfg,
grub_cfg,
BootFilesCopyset(distro.kernel, distro.initrd, distroname), # type: ignore
)
def _copy_esp(self, esp_source: str, buildisodir: str):
"""Copy existing EFI System Partition into the buildisodir."""
filesystem_helpers.copyfile(esp_source, buildisodir + "/efi")
def run(
self,
iso: str = "autoinst.iso",
buildisodir: str = "",
profiles: Optional[List[str]] = None,
xorrisofs_opts: str = "",
distro_name: str = "",
systems: Optional[List[str]] = None,
exclude_dns: bool = False,
**kwargs: Any,
):
"""
Generate a net-installer for a distribution.
By default, the ISO includes all available systems and profiles. Specify
``profiles`` and ``systems`` to only include the selected systems and
profiles. Both parameters can be provided at the same time.
:param iso: The name of the iso. Defaults to "autoinst.iso".
:param buildisodir: This overwrites the directory from the settings in which the iso is built in.
:param profiles: The filter to generate the ISO only for selected profiles.
:param xorrisofs_opts: ``xorrisofs`` options to include additionally.
:param distro_name: For detecting the architecture of the ISO.
:param systems: Don't use that when building standalone ISOs. The filter to generate the ISO only for selected
systems.
:param exclude_dns: Whether the repositories have to be locally available or the internet is reachable.
"""
del kwargs # just accepted for polymorphism
distro_obj = self.parse_distro(distro_name)
if distro_obj.arch not in (
Archs.X86_64,
Archs.PPC,
Archs.PPC64,
Archs.PPC64LE,
Archs.PPC64EL,
):
            raise ValueError(
                f"cobbler buildiso does not work for arch={distro_obj.arch}"
            )
system_names = input_converters.input_string_or_list_no_inherit(systems)
profile_names = input_converters.input_string_or_list_no_inherit(profiles)
loader_config_parts = self._generate_boot_loader_configs(
profile_names, system_names, exclude_dns
)
buildisodir = self._prepare_buildisodir(buildisodir)
buildiso_dirs = None
distro_mirrordir = pathlib.Path(self.api.settings().webdir) / "distro_mirror"
xorriso_func = None
esp_location = ""
if distro_obj.arch == Archs.X86_64:
xorriso_func = self._xorriso_x86_64
buildiso_dirs = self.create_buildiso_dirs_x86_64(buildisodir)
# fill temporary directory with arch-specific binaries
self._copy_isolinux_files()
try:
filesource = self._find_distro_source(
distro_obj.kernel, str(distro_mirrordir)
)
self.logger.info("filesource=%s", filesource)
distro_esp = self._find_esp(pathlib.Path(filesource))
self.logger.info("esp=%s", distro_esp)
except ValueError:
distro_esp = None
if distro_esp is not None:
self._copy_esp(distro_esp, buildisodir)
else:
esp_location = self._create_esp_image_file(buildisodir)
self._copy_grub_into_esp(esp_location, distro_obj.arch)
self._write_grub_cfg(loader_config_parts.grub, buildiso_dirs.grub)
self._write_isolinux_cfg(
loader_config_parts.isolinux, buildiso_dirs.isolinux
)
elif distro_obj.arch in (Archs.PPC, Archs.PPC64, Archs.PPC64LE, Archs.PPC64EL):
xorriso_func = self._xorriso_ppc64le
buildiso_dirs = self.create_buildiso_dirs_ppc64le(buildisodir)
grub_bin = (
pathlib.Path(self.api.settings().bootloaders_dir)
/ "grub"
/ "grub.ppc64le"
)
bootinfo_txt = self._render_bootinfo_txt(distro_name)
# fill temporary directory with arch-specific binaries
filesystem_helpers.copyfile(
str(grub_bin), str(buildiso_dirs.grub / "grub.elf")
)
self._write_grub_cfg(loader_config_parts.grub, buildiso_dirs.grub)
self._write_bootinfo(bootinfo_txt, buildiso_dirs.ppc)
else:
            raise ValueError(
                f"cobbler buildiso does not work for arch={distro_obj.arch}"
            )
for copyset in loader_config_parts.bootfiles_copysets:
self._copy_boot_files(
copyset.src_kernel,
copyset.src_initrd,
str(buildiso_dirs.root),
copyset.new_filename,
)
xorriso_func(xorrisofs_opts, iso, buildisodir, esp_location)
# File: cobbler_cobbler/cobbler/actions/buildiso/__init__.py
"""
Builds bootable CD images that have PXE-equivalent behavior for all Cobbler distros/profiles/systems currently in
memory.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
import logging
import os
import pathlib
import re
import shutil
from typing import TYPE_CHECKING, Dict, List, NamedTuple, Optional, Union
from cobbler import templar, utils
from cobbler.enums import Archs
from cobbler.utils import filesystem_helpers, input_converters
if TYPE_CHECKING:
from cobbler.api import CobblerAPI
from cobbler.cobbler_collections.collection import ITEM, Collection
from cobbler.items.distro import Distro
from cobbler.items.profile import Profile
def add_remaining_kopts(kopts: Dict[str, Union[str, List[str]]]) -> str:
"""Add remaining kernel_options to append_line
:param kopts: The kernel options which are not present in append_line.
:return: A single line with all kernel options from the dictionary in the string. Starts with a space.
"""
append_line = [""] # empty str to ensure the returned str starts with a space
for option, args in kopts.items():
if args is None: # type: ignore
append_line.append(f"{option}")
continue
if not isinstance(args, list):
args = [args]
for arg in args:
arg_str = format(arg)
if " " in arg_str:
arg_str = f'"{arg_str}"'
append_line.append(f"{option}={arg_str}")
return " ".join(append_line)
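The flattening rules implemented above (a bare flag for `None`, a repeated key for list values, quoting for values with spaces) can be shown as a self-contained sketch (the function name is illustrative):

```python
from typing import Dict, List, Optional, Union


def flatten_kopts(kopts: Dict[str, Optional[Union[str, List[str]]]]) -> str:
    parts = [""]  # empty first element so the result starts with a space
    for option, args in kopts.items():
        if args is None:
            parts.append(option)  # bare flag, e.g. "quiet"
            continue
        if not isinstance(args, list):
            args = [args]
        for arg in args:
            arg_str = format(arg)
            if " " in arg_str:
                arg_str = f'"{arg_str}"'  # quote values containing spaces
            parts.append(f"{option}={arg_str}")
    return " ".join(parts)
```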
class BootFilesCopyset(NamedTuple): # pylint: disable=missing-class-docstring
src_kernel: str
src_initrd: str
new_filename: str
class LoaderCfgsParts(NamedTuple): # pylint: disable=missing-class-docstring
isolinux: List[str]
grub: List[str]
bootfiles_copysets: List[BootFilesCopyset]
class BuildisoDirsX86_64(
NamedTuple
): # noqa: N801 pylint: disable=invalid-name,missing-class-docstring
root: pathlib.Path
isolinux: pathlib.Path
grub: pathlib.Path
autoinstall: pathlib.Path
repo: pathlib.Path
class BuildisoDirsPPC64LE(NamedTuple): # pylint: disable=missing-class-docstring
root: pathlib.Path
grub: pathlib.Path
ppc: pathlib.Path
autoinstall: pathlib.Path
repo: pathlib.Path
class Autoinstall(NamedTuple): # pylint: disable=missing-class-docstring
config: str
repos: List[str]
class BuildIso:
"""
Handles conversion of internal state to the isolinux tree layout
"""
def __init__(self, api: "CobblerAPI") -> None:
"""Constructor which initializes things here. The collection manager pulls all other dependencies in.
:param api: The API instance which holds all information about objects in Cobbler.
"""
self.api = api
self.distmap: Dict[str, str] = {}
self.distctr = 0
self.logger = logging.getLogger()
self.templar = templar.Templar(api)
self.isolinuxdir = ""
# based on https://uefi.org/sites/default/files/resources/UEFI%20Spec%202.8B%20May%202020.pdf
self.efi_fallback_renames = {
"grubaa64": "bootaa64.efi",
"grubx64.efi": "bootx64.efi",
}
# grab the header from buildiso.header file
self.iso_template = (
pathlib.Path(self.api.settings().iso_template_dir)
.joinpath("buildiso.template")
.read_text(encoding="UTF-8")
)
self.isolinux_menuentry_template = (
pathlib.Path(api.settings().iso_template_dir)
.joinpath("isolinux_menuentry.template")
.read_text(encoding="UTF-8")
)
self.grub_menuentry_template = (
pathlib.Path(api.settings().iso_template_dir)
.joinpath("grub_menuentry.template")
.read_text(encoding="UTF-8")
)
self.bootinfo_template = (
pathlib.Path(api.settings().iso_template_dir)
.joinpath("bootinfo.template")
.read_text(encoding="UTF-8")
)
def _find_distro_source(self, known_file: str, distro_mirror: str) -> str:
"""
Find a distro source tree based on a known file.
:param known_file: Path to a file that's known to be part of the distribution,
commonly the path to the kernel.
        :raises ValueError: When no installation source was found.
:return: Root of the distribution's source tree.
"""
self.logger.debug("Trying to locate source.")
(source_head, source_tail) = os.path.split(known_file)
filesource = None
while source_tail != "":
if source_head == distro_mirror:
filesource = os.path.join(source_head, source_tail)
self.logger.debug("Found source in %s", filesource)
break
(source_head, source_tail) = os.path.split(source_head)
if filesource:
return filesource
else:
raise ValueError(
"No installation source found. When building a standalone (incl. airgapped) ISO"
" you must specify a --source if the distro install tree is not hosted locally"
)
def _copy_boot_files(
self, kernel_path: str, initrd_path: str, destdir: str, new_filename: str = ""
):
"""Copy kernel/initrd to destdir with (optional) newfile prefix
        :param kernel_path: Path to a distro's kernel.
        :param initrd_path: Path to a distro's initrd.
        :param destdir: The destination directory.
        :param new_filename: The new filename. Kernel and initrd get different extensions to separate them from
                             each other.
"""
kernel_source = pathlib.Path(kernel_path)
initrd_source = pathlib.Path(initrd_path)
path_destdir = pathlib.Path(destdir)
if new_filename:
kernel_dest = str(path_destdir / f"{new_filename}.krn")
initrd_dest = str(path_destdir / f"{new_filename}.img")
else:
kernel_dest = str(path_destdir / kernel_source.name)
initrd_dest = str(path_destdir / initrd_source.name)
filesystem_helpers.copyfile(str(kernel_source), kernel_dest)
filesystem_helpers.copyfile(str(initrd_source), initrd_dest)
def filter_profiles(
self, selected_items: Optional[List[str]] = None
) -> List["Profile"]:
"""
Return a list of valid profile objects selected from all profiles by name, or everything if ``selected_items``
is empty.
:param selected_items: A list of names to include in the returned list.
:return: A list of valid profiles. If an error occurred this is logged and an empty list is returned.
"""
if selected_items is None:
selected_items = []
return self.filter_items(self.api.profiles(), selected_items)
def filter_items(
self, all_objs: "Collection[ITEM]", selected_items: List[str]
) -> List["ITEM"]:
"""Return a list of valid profile or system objects selected from all profiles or systems by name, or everything
if selected_items is empty.
:param all_objs: The collection of items to filter.
:param selected_items: The list of names
        :raises ValueError: Raised when the list of filtered systems or profiles is empty.
:return: A list of valid profiles OR systems. If an error occurred this is logged and an empty list is returned.
"""
# No profiles/systems selection is made, let's return everything.
if len(selected_items) == 0:
return list(all_objs)
filtered_objects: List["ITEM"] = []
for name in selected_items:
item_object = all_objs.find(name=name)
if item_object is not None and not isinstance(item_object, list):
filtered_objects.append(item_object)
selected_items.remove(name)
for bad_name in selected_items:
self.logger.warning('"%s" is not a valid profile or system', bad_name)
if len(filtered_objects) == 0:
raise ValueError("No valid systems or profiles were specified.")
return filtered_objects
def parse_distro(self, distro_name: str) -> "Distro":
"""
Find and return distro object.
:param distro_name: Name of the distribution to parse.
:raises ValueError: If the distro is not found.
"""
distro_obj = self.api.find_distro(name=distro_name)
if distro_obj is None or isinstance(distro_obj, list):
            raise ValueError(f"Distribution {distro_name} not found or ambiguous.")
return distro_obj
def parse_profiles(
self, profiles: Optional[List[str]], distro_obj: "Distro"
) -> List["Profile"]:
"""
        Resolve the given profile names to profile objects and ensure they all belong to the given distro.
        :param profiles: Profile filter; may be None or empty for "all profiles of the distro".
        :param distro_obj: The distro whose profiles are selected.
"""
profile_names = input_converters.input_string_or_list_no_inherit(profiles)
if profile_names:
orphans = set(profile_names) - set(distro_obj.children)
if len(orphans) > 0:
raise ValueError(
"When building a standalone ISO, all --profiles must be"
" under --distro. Extra --profiles: {}".format(
                        ",".join(sorted(str(o) for o in orphans))
)
)
return self.filter_profiles(profile_names)
else:
return self.filter_profiles(distro_obj.children) # type: ignore[reportGeneralTypeIssues]
def _copy_isolinux_files(self):
"""
This method copies the required and optional files from syslinux into the directories we use for building the
ISO.
"""
self.logger.info("copying syslinux files")
files_to_copy = [
"isolinux.bin",
"menu.c32",
"chain.c32",
"ldlinux.c32",
"libcom32.c32",
"libutil.c32",
]
optional_files = ["ldlinux.c32", "libcom32.c32", "libutil.c32"]
syslinux_folders = [
pathlib.Path(self.api.settings().syslinux_dir),
pathlib.Path(self.api.settings().syslinux_dir).joinpath("modules/bios/"),
pathlib.Path("/usr/lib/syslinux/"),
pathlib.Path("/usr/lib/ISOLINUX/"),
]
# file_copy_success will be used to check for missing files
file_copy_success: Dict[str, bool] = {
f: False for f in files_to_copy if f not in optional_files
}
for syslinux_folder in syslinux_folders:
if syslinux_folder.exists():
for file_to_copy in files_to_copy:
source_file = syslinux_folder.joinpath(file_to_copy)
if source_file.exists():
filesystem_helpers.copyfile(
str(source_file),
os.path.join(self.isolinuxdir, file_to_copy),
)
file_copy_success[file_to_copy] = True
unsuccessful_copied_files = [k for k, v in file_copy_success.items() if not v]
if len(unsuccessful_copied_files) > 0:
self.logger.error(
'The following files were not found: "%s"',
'", "'.join(unsuccessful_copied_files),
)
raise FileNotFoundError(
"Required file(s) not found. Please check your syslinux installation"
)
def _render_grub_entry(
self, append_line: str, menu_name: str, kernel_path: str, initrd_path: str
) -> str:
"""
        Render a single GRUB menu entry.
        :param append_line: The kernel command line for the entry.
        :param menu_name: The name shown in the boot menu.
        :param kernel_path: Path of the kernel inside the ISO.
        :param initrd_path: Path of the initrd inside the ISO.
"""
return self.templar.render(
self.grub_menuentry_template,
out_path=None,
search_table={
"menu_name": menu_name,
"kernel_path": kernel_path,
"initrd_path": initrd_path,
"kernel_options": re.sub(r".*initrd=\S+", "", append_line),
},
)
def _render_isolinux_entry(
self, append_line: str, menu_name: str, kernel_path: str, menu_indent: int = 0
) -> str:
"""Render a single isolinux.cfg menu entry."""
return self.templar.render(
self.isolinux_menuentry_template,
out_path=None,
search_table={
"menu_name": menu_name,
"kernel_path": kernel_path,
"append_line": append_line.lstrip(),
"menu_indent": menu_indent,
},
template_type="jinja2",
)
def _render_bootinfo_txt(self, distro_name: str) -> str:
"""Render bootinfo.txt for ppc."""
return self.templar.render(
self.bootinfo_template,
out_path=None,
search_table={"distro_name": distro_name},
template_type="jinja2",
)
def _copy_grub_into_esp(self, esp_image_location: str, arch: Archs):
"""Copy grub boot loader into EFI System Partition.
:param esp_image_location: Path to EFI System Partition.
:param arch: Distribution architecture
"""
grub_name = self.calculate_grub_name(arch)
efi_name = self.efi_fallback_renames.get(grub_name, grub_name)
esp_efi_boot = self._create_efi_boot_dir(esp_image_location)
grub_binary = (
pathlib.Path(self.api.settings().bootloaders_dir) / "grub" / grub_name
)
filesystem_helpers.copyfileimage(
str(grub_binary), esp_image_location, f"{esp_efi_boot}/{efi_name}"
)
def calculate_grub_name(self, desired_arch: Archs) -> str:
"""
This function checks the bootloaders_formats in our settings and then checks if there is a match between the
architectures and the distribution architecture.
        :param desired_arch: The distribution architecture to get the GRUB2 loader name for.
"""
loader_formats = self.api.settings().bootloaders_formats
grub_binary_names: Dict[str, str] = {}
for loader_format, values in loader_formats.items():
name = values.get("binary_name", None)
if name is not None and isinstance(name, str):
grub_binary_names[loader_format.lower()] = name
if desired_arch in (Archs.PPC, Archs.PPC64, Archs.PPC64LE, Archs.PPC64EL):
# GRUB can boot all Power architectures it supports via the following modules directory.
return grub_binary_names["powerpc-ieee1275"]
if desired_arch == Archs.AARCH64:
            # GRUB has only one 64-bit ARM variant it can boot; its name differs from the one Cobbler uses.
return grub_binary_names["arm64-efi"]
if desired_arch == Archs.ARM:
            # GRUB has only one 32-bit ARM variant it can boot; its name differs from the one Cobbler uses.
return grub_binary_names["arm"]
# Now we do the regular stuff: We map the beginning of the Cobbler arch and try to find suitable loaders.
# We do want to drop "grub.0" always as it is not efi bootable.
matches = {
k: v
for (k, v) in grub_binary_names.items()
if k.startswith(desired_arch.value) and v != "grub.0"
}
if len(matches) == 0:
raise ValueError(
f'No matches found for requested Cobbler Arch: "{str(desired_arch.value)}"'
)
if len(matches) == 1:
return next(iter(matches.values()))
raise ValueError(
f'Ambiguous matches for GRUB to Cobbler Arch mapping! Requested: "{str(desired_arch.value)}"'
f' Found: "{str(matches.values())}"'
)
def _write_isolinux_cfg(
self, cfg_parts: List[str], output_dir: pathlib.Path
) -> None:
"""Write isolinux.cfg.
:param cfg_parts: List of str that is written to the config, joined by newlines.
:param output_dir: pathlib.Path that the isolinux.cfg file is written into.
"""
output_file = output_dir / "isolinux.cfg"
self.logger.info("Writing %s", output_file)
with open(output_file, "w") as f:
f.write("\n".join(cfg_parts))
def _write_grub_cfg(self, cfg_parts: List[str], output_dir: pathlib.Path) -> None:
"""Write grub.cfg.
:param cfg_parts: List of str that is written to the config, joined by newlines.
:param output_dir: pathlib.Path that the grub.cfg file is written into.
"""
output_file = output_dir / "grub.cfg"
self.logger.info("Writing %s", output_file)
with open(output_file, "w") as f:
f.write("\n".join(cfg_parts))
def _write_bootinfo(self, bootinfo_txt: str, output_dir: pathlib.Path) -> None:
"""Write ppc/bootinfo.txt
        :param bootinfo_txt: String content that is written to bootinfo.txt.
:param output_dir: pathlib.Path that the bootinfo.txt is written into.
"""
output_file = output_dir / "bootinfo.txt"
self.logger.info("Writing %s", output_file)
with open(output_file, "w") as f:
f.write(bootinfo_txt)
def _create_esp_image_file(self, tmpdir: str) -> str:
esp = pathlib.Path(tmpdir) / "efi"
mkfs_cmd = ["mkfs.fat", "-C", str(esp), "3528"]
rc = utils.subprocess_call(mkfs_cmd, shell=False)
if rc != 0:
            self.logger.error("Could not create ESP image file")
            raise RuntimeError("Could not create ESP image file")
return str(esp)
def _create_efi_boot_dir(self, esp_mountpoint: str) -> str:
efi_boot = pathlib.Path("EFI") / "BOOT"
self.logger.info("Creating %s", efi_boot)
filesystem_helpers.mkdirimage(efi_boot, esp_mountpoint)
return str(efi_boot)
def _find_esp(self, root_dir: pathlib.Path) -> Optional[str]:
"""Walk root directory and look for an ESP."""
candidates = [str(match) for match in root_dir.glob("**/efi")]
if len(candidates) == 0:
return None
elif len(candidates) == 1:
return candidates[0]
else:
self.logger.info(
"Found multiple ESP (%s), choosing %s", candidates, candidates[0]
)
return candidates[0]
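`_find_esp` is a plain recursive glob for a directory named `efi`. A self-contained sketch of the same lookup; the temporary directory layout is illustrative:

```python
import pathlib
import tempfile


def find_esp(root_dir: pathlib.Path):
    # Pick the first entry named "efi" anywhere under root_dir, as above.
    candidates = [str(match) for match in root_dir.glob("**/efi")]
    return candidates[0] if candidates else None


with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "boot" / "efi").mkdir(parents=True)
    print(find_esp(root))  # ends with boot/efi
```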
def _prepare_buildisodir(self, buildisodir: str = "") -> str:
"""
        This validates the path and type of the buildiso directory and then (re-)creates the appropriate directories.
:param buildisodir: The directory in which the build of the ISO takes place. If an empty string then the default
directory is used.
:raises ValueError: In case the specified directory does not exist.
:raises TypeError: In case the specified argument is not of type str.
:return: The validated and normalized directory with appropriate subfolders provisioned.
"""
if not isinstance(buildisodir, str): # type: ignore
raise TypeError("buildisodir needs to be of type str!")
if not buildisodir:
buildisodir = self.api.settings().buildisodir
else:
if not os.path.isdir(buildisodir):
raise ValueError("The --tempdir specified is not a directory")
(_, buildisodir_tail) = os.path.split(os.path.normpath(buildisodir))
if buildisodir_tail != "buildiso":
buildisodir = os.path.join(buildisodir, "buildiso")
self.logger.info('Deleting and recreating the buildisodir at "%s"', buildisodir)
if os.path.exists(buildisodir):
shutil.rmtree(buildisodir)
os.makedirs(buildisodir)
self.isolinuxdir = os.path.join(buildisodir, "isolinux")
return buildisodir
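The tail check in `_prepare_buildisodir` ensures the working directory always ends in a `buildiso` path component. Sketched standalone, with illustrative paths:

```python
import os


def normalize_buildisodir(buildisodir: str) -> str:
    # Append a "buildiso" subfolder unless the path already ends in one,
    # mirroring the tail check in _prepare_buildisodir().
    _, tail = os.path.split(os.path.normpath(buildisodir))
    if tail != "buildiso":
        buildisodir = os.path.join(buildisodir, "buildiso")
    return buildisodir


print(normalize_buildisodir("/var/cache/cobbler"))  # /var/cache/cobbler/buildiso
```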
def create_buildiso_dirs_x86_64(self, buildiso_root: str) -> BuildisoDirsX86_64:
"""Create directories in the buildiso root.
Layout:
.
├── autoinstall
├── EFI
│ └── BOOT
├── isolinux
└── repo_mirror
"""
root = pathlib.Path(buildiso_root)
isolinuxdir = root / "isolinux"
grubdir = root / "EFI" / "BOOT"
autoinstalldir = root / "autoinstall"
repodir = root / "repo_mirror"
for d in [isolinuxdir, grubdir, autoinstalldir, repodir]:
d.mkdir(parents=True)
return BuildisoDirsX86_64(
root=root,
isolinux=isolinuxdir,
grub=grubdir,
autoinstall=autoinstalldir,
repo=repodir,
)
def create_buildiso_dirs_ppc64le(self, buildiso_root: str) -> BuildisoDirsPPC64LE:
"""Create directories in the buildiso root.
Layout:
.
├── autoinstall
├── boot
├── ppc
└── repo_mirror
"""
root = pathlib.Path(buildiso_root)
grubdir = root / "boot"
ppcdir = root / "ppc"
autoinstalldir = root / "autoinstall"
repodir = root / "repo_mirror"
for _d in [grubdir, ppcdir, autoinstalldir, repodir]:
_d.mkdir(parents=True)
return BuildisoDirsPPC64LE(
root=root,
grub=grubdir,
ppc=ppcdir,
autoinstall=autoinstalldir,
repo=repodir,
)
def _xorriso_ppc64le(
self,
xorrisofs_opts: str,
iso: str,
buildisodir: str,
esp_path: str = "",
):
"""
Build the final xorrisofs command which is then executed on the disk.
:param xorrisofs_opts: The additional options for xorrisofs.
:param iso: The name of the output iso.
:param buildisodir: The directory in which we build the ISO.
"""
del esp_path # just accepted for polymorphism
cmd = [
"xorriso",
"-as",
"mkisofs",
]
if xorrisofs_opts != "":
cmd.append(xorrisofs_opts)
cmd.extend(
[
"-chrp-boot",
"-hfs-bless-by",
"p",
"boot",
"-V",
"COBBLER_INSTALL",
"-o",
iso,
buildisodir,
]
)
xorrisofs_return_code = utils.subprocess_call(cmd, shell=False)
if xorrisofs_return_code != 0:
self.logger.error("xorrisofs failed with non zero exit code!")
return
self.logger.info("ISO build complete")
self.logger.info("You may wish to delete: %s", buildisodir)
self.logger.info("The output file is: %s", iso)
def _xorriso_x86_64(
self, xorrisofs_opts: str, iso: str, buildisodir: str, esp_path: str
):
"""
Build the final xorrisofs command which is then executed on the disk.
:param xorrisofs_opts: The additional options for xorrisofs.
:param iso: The name of the output iso.
:param buildisodir: The directory in which we build the ISO.
:param esp_path: The absolute path to the EFI system partition.
"""
running_on, _ = utils.os_release()
if running_on in ("suse", "centos", "virtuozzo", "redhat"):
isohdpfx_location = pathlib.Path(self.api.settings().syslinux_dir).joinpath(
"isohdpfx.bin"
)
else:
isohdpfx_location = pathlib.Path(self.api.settings().syslinux_dir).joinpath(
"mbr/isohdpfx.bin"
)
esp_relative_path = pathlib.Path(esp_path).relative_to(buildisodir)
cmd = [
"xorriso",
"-as",
"mkisofs",
]
if xorrisofs_opts != "":
cmd.append(xorrisofs_opts)
cmd += [
"-isohybrid-mbr",
str(isohdpfx_location),
"-c",
"isolinux/boot.cat",
"-b",
"isolinux/isolinux.bin",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"-eltorito-alt-boot",
"-e",
str(esp_relative_path),
"-no-emul-boot",
"-isohybrid-gpt-basdat",
"-V",
"COBBLER_INSTALL",
"-o",
iso,
buildisodir,
]
xorrisofs_return_code = utils.subprocess_call(cmd, shell=False)
if xorrisofs_return_code != 0:
self.logger.error("xorrisofs failed with non zero exit code!")
return
self.logger.info("ISO build complete")
self.logger.info("You may wish to delete: %s", buildisodir)
self.logger.info("The output file is: %s", iso)
# ==== cobbler/cobbler (GPL-2.0): cobbler/actions/buildiso/standalone.py ====
"""
This module contains the specific code for generating standalone or airgapped ISOs.
"""
# SPDX-License-Identifier: GPL-2.0-or-later
import itertools
import os
import pathlib
import re
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Union
from cobbler import utils
from cobbler.actions import buildiso
from cobbler.actions.buildiso import Autoinstall, BootFilesCopyset, LoaderCfgsParts
from cobbler.enums import Archs
from cobbler.utils import filesystem_helpers
if TYPE_CHECKING:
from cobbler.items.distro import Distro
from cobbler.items.profile import Profile
from cobbler.items.system import System
CDREGEX = re.compile(r"^\s*url .*\n", re.IGNORECASE | re.MULTILINE)
def _generate_append_line_standalone(
data: Dict[Any, Any], distro: "Distro", descendant: Union["Profile", "System"]
) -> str:
"""
Generates the append line for the kernel so the installation can be done unattended.
:param data: The values for the append line. The key "kernel_options" must be present.
:param distro: The distro object to generate the append line from.
:param descendant: The profile or system which is underneath the distro.
:return: The base append_line which we need for booting the built ISO. Contains initrd and autoinstall parameter.
"""
append_line = f" APPEND initrd=/{os.path.basename(distro.initrd)}"
if distro.breed == "redhat":
if distro.os_version in ["rhel4", "rhel5", "rhel6", "fedora16"]:
append_line += f" ks=cdrom:/autoinstall/{descendant.name}.cfg repo=cdrom"
else:
append_line += (
f" inst.ks=cdrom:/autoinstall/{descendant.name}.cfg inst.repo=cdrom"
)
elif distro.breed == "suse":
append_line += (
f" autoyast=file:///autoinstall/{descendant.name}.cfg install=cdrom:///"
)
if "install" in data["kernel_options"]:
del data["kernel_options"]["install"]
elif distro.breed in ["ubuntu", "debian"]:
append_line += f" auto-install/enable=true preseed/file=/cdrom/autoinstall/{descendant.name}.cfg"
# add remaining kernel_options to append_line
append_line += buildiso.add_remaining_kopts(data["kernel_options"])
return append_line
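The breed switch in `_generate_append_line_standalone` can be sketched without any Cobbler objects. Breed names and paths are taken from the code above; the function name and parameters are illustrative:

```python
# Hedged, standalone sketch of the breed-specific autoinstall kernel
# arguments chosen above; distro/profile objects are replaced by plain values.
def autoinstall_kopts(breed: str, os_version: str, name: str) -> str:
    if breed == "redhat":
        # Older Red Hat releases use the legacy ks=/repo= syntax.
        if os_version in ("rhel4", "rhel5", "rhel6", "fedora16"):
            return f" ks=cdrom:/autoinstall/{name}.cfg repo=cdrom"
        return f" inst.ks=cdrom:/autoinstall/{name}.cfg inst.repo=cdrom"
    if breed == "suse":
        return f" autoyast=file:///autoinstall/{name}.cfg install=cdrom:///"
    if breed in ("ubuntu", "debian"):
        return f" auto-install/enable=true preseed/file=/cdrom/autoinstall/{name}.cfg"
    return ""


print(autoinstall_kopts("redhat", "rhel9", "web01"))
```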
class StandaloneBuildiso(buildiso.BuildIso):
"""
This class contains all functionality related to building self-contained installation images.
"""
def _write_autoinstall_cfg(
self, data: Dict[str, Autoinstall], output_dir: pathlib.Path
):
self.logger.info("Writing auto-installation config files")
self.logger.debug(data)
for file_name, autoinstall in data.items():
with open(output_dir / f"{file_name}.cfg", "w") as f:
f.write(autoinstall.config)
def _generate_descendant_config(
self,
descendant: Union["Profile", "System"],
menu_indent: int,
distro: "Distro",
append_line: str,
) -> Tuple[str, str, BootFilesCopyset]:
kernel_path = f"/{os.path.basename(distro.kernel)}"
initrd_path = f"/{os.path.basename(distro.initrd)}"
isolinux_cfg = self._render_isolinux_entry(
append_line,
menu_name=descendant.name,
kernel_path=kernel_path,
menu_indent=menu_indent,
)
grub_cfg = self._render_grub_entry(
append_line,
menu_name=distro.name,
kernel_path=kernel_path,
initrd_path=initrd_path,
)
return (
isolinux_cfg,
grub_cfg,
BootFilesCopyset(distro.kernel, distro.initrd, ""),
)
def validate_repos(
self, profile_name: str, repo_names: List[str], repo_mirrordir: pathlib.Path
):
"""Sanity checks for repos to sync.
This function checks that repos are known to cobbler and have a local mirror directory.
Raises ValueError if any repo fails the validation.
"""
for repo_name in repo_names:
repo_obj = self.api.find_repo(name=repo_name)
if repo_obj is None or isinstance(repo_obj, list):
raise ValueError(
f"Repository {repo_name}, referenced by {profile_name}, not found or ambiguous."
)
if not repo_obj.mirror_locally:
raise ValueError(
f"Repository {repo_name} is not configured for local mirroring."
)
if not repo_mirrordir.joinpath(repo_name).exists():
raise ValueError(
f"Local mirror directory missing for repository {repo_name}"
)
def _generate_item(
self,
descendant_obj: Union["Profile", "System"],
distro_obj: "Distro",
airgapped: bool,
cfg_parts: LoaderCfgsParts,
repo_mirrordir: pathlib.Path,
autoinstall_data: Dict[str, Any],
):
data: Dict[Any, Any] = utils.blender(self.api, False, descendant_obj)
utils.kopts_overwrite(
data["kernel_options"], self.api.settings().server, distro_obj.breed
)
append_line = _generate_append_line_standalone(data, distro_obj, descendant_obj)
name = descendant_obj.name
config_args: Dict[str, Any] = {
"descendant": descendant_obj,
"distro": distro_obj,
"append_line": append_line,
}
if descendant_obj.COLLECTION_TYPE == "profile":
config_args.update({"menu_indent": 0})
autoinstall_args = {"profile": descendant_obj}
else: # system
config_args.update({"menu_indent": 4})
autoinstall_args = {"system": descendant_obj}
isolinux, grub, to_copy = self._generate_descendant_config(**config_args)
autoinstall = self.api.autoinstallgen.generate_autoinstall(**autoinstall_args) # type: ignore
if distro_obj.breed == "redhat":
autoinstall = CDREGEX.sub("cdrom\n", autoinstall, count=1)
repos: List[str] = []
if airgapped:
repos = data.get("repos", [])
if repos:
self.validate_repos(name, repos, repo_mirrordir)
            autoinstall = re.sub(
                rf"^(\s*repo --name=\S+ --baseurl=).*/cobbler/distro_mirror/{distro_obj.name}/?(.*)",
                r"\1 file:///mnt/source/repo_mirror/\2",
                autoinstall,
                flags=re.MULTILINE,
            )
autoinstall = self._update_repos_in_autoinstall_data(autoinstall, repos)
cfg_parts.isolinux.append(isolinux)
cfg_parts.grub.append(grub)
cfg_parts.bootfiles_copysets.append(to_copy)
autoinstall_data[name] = Autoinstall(autoinstall, repos)
def _update_repos_in_autoinstall_data(
self, autoinstall_data: str, repos_names: List[str]
) -> str:
for repo_name in repos_names:
            autoinstall_data = re.sub(
                rf"^(\s*repo --name={repo_name} --baseurl=).*",
                rf"\1 file:///mnt/source/repo_mirror/{repo_name}",
                autoinstall_data,
                flags=re.MULTILINE,
            )
return autoinstall_data
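The rewrite above is one `re.sub` per repo with `flags=re.MULTILINE` so that `^` anchors each kickstart line. A minimal standalone sketch; the repo name and kickstart content are illustrative:

```python
import re


def rewrite_repo_baseurls(autoinstall_data: str, repo_names: list) -> str:
    # Point each named kickstart repo at the ISO's local repo_mirror copy.
    for repo_name in repo_names:
        autoinstall_data = re.sub(
            rf"^(\s*repo --name={repo_name} --baseurl=).*",
            rf"\1file:///mnt/source/repo_mirror/{repo_name}",
            autoinstall_data,
            flags=re.MULTILINE,
        )
    return autoinstall_data


ks = "repo --name=base --baseurl=http://example.com/base\ninstall\n"
print(rewrite_repo_baseurls(ks, ["base"]))
```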
def _copy_distro_files(self, filesource: str, output_dir: str):
"""Copy the distro tree in filesource to output_dir.
:param filesource: Path to root of the distro source tree.
:param output_dir: Path to the directory into which to copy all files.
:raises RuntimeError: rsync command failed.
"""
cmd = [
"rsync",
"-rlptgu",
"--exclude",
"boot.cat",
"--exclude",
"TRANS.TBL",
"--exclude",
"isolinux/",
f"{filesource}/",
f"{output_dir}/",
]
self.logger.info("- copying distro files (%s)", cmd)
rc = utils.subprocess_call(cmd, shell=False)
if rc != 0:
raise RuntimeError("rsync of distro files failed")
def _copy_repos(
self,
autoinstall_data: Iterable[Autoinstall],
source_dir: pathlib.Path,
output_dir: pathlib.Path,
):
"""Copy repos for airgapped ISOs.
The caller of this function has to check if an airgapped ISO is built.
:param autoinstall_data: Iterable of Autoinstall records that contain the lists of repos.
:param source_dir: Path to the directory containing the repos.
:param output_dir: Path to the directory into which to copy all files.
:raises RuntimeError: rsync command failed.
"""
for repo in itertools.chain.from_iterable(ai.repos for ai in autoinstall_data):
self.logger.info(" - copying repo '%s' for airgapped iso", repo)
cmd = [
"rsync",
"-rlptgu",
"--exclude",
"boot.cat",
"--exclude",
"TRANS.TBL",
f"{source_dir / repo}/",
str(output_dir),
]
rc = utils.subprocess_call(cmd, shell=False)
if rc != 0:
raise RuntimeError(f"Copying of repo {repo} failed.")
def run(
self,
iso: str = "autoinst.iso",
buildisodir: str = "",
profiles: Optional[List[str]] = None,
xorrisofs_opts: str = "",
distro_name: str = "",
airgapped: bool = False,
source: str = "",
**kwargs: Any,
):
"""
        Run the whole ISO generation from bottom to top. By default this builds an ISO for all available systems
and profiles.
This is the only method which should be called from non-class members. The ``profiles`` and ``system``
parameters can be combined.
:param iso: The name of the iso. Defaults to "autoinst.iso".
:param buildisodir: This overwrites the directory from the settings in which the iso is built in.
:param profiles: The filter to generate the ISO only for selected profiles.
:param xorrisofs_opts: ``xorrisofs`` options to include additionally.
:param distro_name: For detecting the architecture of the ISO.
:param airgapped: This option implies ``standalone=True``.
        :param source: If the ISO should be available offline, this is the path to the sources of the image.
"""
del kwargs # just accepted for polymorphism
distro_obj = self.parse_distro(distro_name)
if distro_obj.arch not in (
Archs.X86_64,
Archs.PPC,
Archs.PPC64,
Archs.PPC64LE,
Archs.PPC64EL,
):
            raise ValueError(
                f"cobbler buildiso does not work for arch={distro_obj.arch}"
            )
profile_objs = self.parse_profiles(profiles, distro_obj)
filesource = source
loader_config_parts = LoaderCfgsParts([self.iso_template], [], [])
autoinstall_data: Dict[str, Autoinstall] = {}
buildisodir = self._prepare_buildisodir(buildisodir)
repo_mirrordir = pathlib.Path(self.api.settings().webdir) / "repo_mirror"
distro_mirrordir = pathlib.Path(self.api.settings().webdir) / "distro_mirror"
esp_location = ""
xorriso_func = None
buildiso_dirs = None
# generate configs, list of repos, and autoinstall data
for profile_obj in profile_objs:
self._generate_item(
descendant_obj=profile_obj,
distro_obj=distro_obj,
airgapped=airgapped,
cfg_parts=loader_config_parts,
repo_mirrordir=repo_mirrordir,
autoinstall_data=autoinstall_data,
)
for descendant in profile_obj.descendants:
# handle everything below this top-level profile
self._generate_item(
descendant_obj=descendant, # type: ignore
distro_obj=distro_obj,
airgapped=airgapped,
cfg_parts=loader_config_parts,
repo_mirrordir=repo_mirrordir,
autoinstall_data=autoinstall_data,
)
if distro_obj.arch == Archs.X86_64:
xorriso_func = self._xorriso_x86_64
buildiso_dirs = self.create_buildiso_dirs_x86_64(buildisodir)
# fill temporary directory with arch-specific binaries
self._copy_isolinux_files()
# create EFI system partition (ESP) if needed, uses the ESP from the
# distro if it was copied
esp_location = self._find_esp(buildiso_dirs.root)
if esp_location is None:
esp_location = self._create_esp_image_file(buildisodir)
self._copy_grub_into_esp(esp_location, distro_obj.arch)
self._write_grub_cfg(loader_config_parts.grub, buildiso_dirs.grub)
self._write_isolinux_cfg(
loader_config_parts.isolinux, buildiso_dirs.isolinux
)
elif distro_obj.arch in (Archs.PPC, Archs.PPC64, Archs.PPC64LE, Archs.PPC64EL):
xorriso_func = self._xorriso_ppc64le
buildiso_dirs = self.create_buildiso_dirs_ppc64le(buildisodir)
grub_bin = (
pathlib.Path(self.api.settings().bootloaders_dir)
/ "grub"
/ "grub.ppc64le"
)
bootinfo_txt = self._render_bootinfo_txt(distro_name)
# fill temporary directory with arch-specific binaries
filesystem_helpers.copyfile(
str(grub_bin), str(buildiso_dirs.grub / "grub.elf")
)
self._write_bootinfo(bootinfo_txt, buildiso_dirs.ppc)
self._write_grub_cfg(loader_config_parts.grub, buildiso_dirs.grub)
else:
            raise ValueError(
                f"cobbler buildiso does not work for arch={distro_obj.arch}"
            )
if not filesource:
filesource = self._find_distro_source(
distro_obj.kernel, str(distro_mirrordir)
)
# copy kernels, initrds, and distro files (e.g. installer)
self._copy_distro_files(filesource, str(buildiso_dirs.root))
for copyset in loader_config_parts.bootfiles_copysets:
self._copy_boot_files(
copyset.src_kernel,
copyset.src_initrd,
str(buildiso_dirs.root),
copyset.new_filename,
)
# sync repos
if airgapped:
buildiso_dirs.repo.mkdir(exist_ok=True)
self._copy_repos(
autoinstall_data.values(), repo_mirrordir, buildiso_dirs.repo
)
self._write_autoinstall_cfg(autoinstall_data, buildiso_dirs.autoinstall)
xorriso_func(xorrisofs_opts, iso, buildisodir, esp_location)
# ==== SirVer/ultisnips (GPL-3.0): SirVer_ultisnips/mypy.ini ====
# Global options:
[mypy]
python_version = 3.7
warn_return_any = True
warn_unused_configs = True
mypy_path=pythonx/UltiSnips
[mypy-vim]
ignore_missing_imports = True
[mypy-unidecode]
ignore_missing_imports = True
# ==== SirVer/ultisnips (GPL-3.0): SirVer_ultisnips/test_all.py ====
#!/usr/bin/env python3
# encoding: utf-8
#
# See CONTRIBUTING.md for an explanation of this file.
#
# NOTE: The test suite is not working under Windows right now as I have no
# access to a windows system for fixing it. Volunteers welcome. Here are some
# comments from the last time I got the test suite running under windows.
#
# Under windows, COM's SendKeys is used to send keystrokes to the gvim window.
# Note that Gvim must use english keyboard input (choose in windows registry)
# for this to work properly as SendKeys is a piece of chunk. (i.e. it sends
# <F13> when you send a | symbol while using german key mappings)
# pylint: skip-file
import os
import platform
import subprocess
import unittest
from test.vim_interface import (
create_directory,
tempfile,
VimInterfaceTmux,
VimInterfaceTmuxNeovim,
)
def plugin_cache_dir():
"""The directory that we check out our bundles to."""
return os.path.join(tempfile.gettempdir(), "UltiSnips_test_vim_plugins")
def clone_plugin(plugin):
"""Clone the given plugin into our plugin directory."""
dirname = os.path.join(plugin_cache_dir(), os.path.basename(plugin))
print("Cloning %s -> %s" % (plugin, dirname))
if os.path.exists(dirname):
print("Skip cloning of %s. Already there." % plugin)
return
create_directory(dirname)
subprocess.call(
[
"git",
"clone",
"--recursive",
"--depth",
"1",
"https://github.com/%s" % plugin,
dirname,
]
)
if plugin == "Valloric/YouCompleteMe":
# CLUTCH: this plugin needs something extra.
subprocess.call(os.path.join(dirname, "./install.sh"), cwd=dirname)
def setup_other_plugins(all_plugins):
"""Creates /tmp/UltiSnips_test_vim_plugins and clones all plugins into
this."""
clone_plugin("tpope/vim-pathogen")
for plugin in all_plugins:
clone_plugin(plugin)
if __name__ == "__main__":
import optparse
import sys
def parse_args():
p = optparse.OptionParser("%prog [OPTIONS] <test case names to run>")
p.set_defaults(
session="vim", interrupt=False, verbose=False, retries=4, plugins=False
)
p.add_option(
"-v",
"--verbose",
dest="verbose",
action="store_true",
help="print name of tests as they are executed",
)
p.add_option(
"--clone-plugins",
action="store_true",
            help="Only clones dependent plugins and exits the test runner.",
)
p.add_option(
"--plugins",
action="store_true",
help="Run integration tests with other Vim plugins.",
)
p.add_option(
"-s",
"--session",
dest="session",
metavar="SESSION",
help="session parameters for the terminal multiplexer SESSION [%default]",
)
p.add_option(
"-i",
"--interrupt",
dest="interrupt",
action="store_true",
help="Stop after defining the snippet. This allows the user "
"to interactively test the snippet in vim. You must give "
"exactly one test case on the cmdline. The test will always fail.",
)
p.add_option(
"-r",
"--retries",
dest="retries",
type=int,
help="How often should each test be retried before it is "
            "considered failed. Works around flakiness in the terminal "
"multiplexer and race conditions in writing to the file system.",
)
p.add_option(
"-f",
"--failfast",
dest="failfast",
action="store_true",
help="Stop the test run on the first error or failure.",
)
p.add_option(
"--vim",
dest="vim",
type=str,
default="vim",
help="executable to run when launching vim.",
)
p.add_option(
"--interface",
dest="interface",
type=str,
default="tmux",
help="Interface to use. Use 'tmux' with vanilla Vim and 'tmux_nvim' "
"with Neovim.",
)
p.add_option(
"--python-host-prog",
dest="python_host_prog",
type=str,
default="",
            help="Neovim needs a variable to tell it which python interpreter to use for "
            "py blocks. This needs to be set to point to the correct python interpreter. "
"It is ignored for vanilla Vim.",
)
p.add_option(
"--expected-python-version",
dest="expected_python_version",
type=str,
default="",
help="If set, each test will check sys.version inside of vim to "
"verify we are testing against the expected Python version.",
)
p.add_option(
"--remote-pdb",
dest="pdb_enable",
action="store_true",
help="If set, The remote pdb server will be run",
)
p.add_option(
"--remote-pdb-host",
dest="pdb_host",
type=str,
default="localhost",
help="Remote pdb server host",
)
p.add_option(
"--remote-pdb-port",
dest="pdb_port",
type=int,
default=8080,
help="Remote pdb server port",
)
p.add_option(
"--remote-pdb-non-blocking",
dest="pdb_block",
action="store_false",
help="If set, the server will not freeze vim on error",
)
o, args = p.parse_args()
return o, args
def flatten_test_suite(suite):
flatten = unittest.TestSuite()
for test in suite:
if isinstance(test, unittest.TestSuite):
flatten.addTests(flatten_test_suite(test))
else:
flatten.addTest(test)
return flatten
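`flatten_test_suite` recursively unwraps nested suites into a single flat one. A self-contained illustration with a dummy test case:

```python
import unittest


def flatten_test_suite(suite):
    # Recursively unwrap nested TestSuites into one flat suite of test cases.
    flat = unittest.TestSuite()
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            flat.addTests(flatten_test_suite(test))
        else:
            flat.addTest(test)
    return flat


class Dummy(unittest.TestCase):
    def test_a(self):
        pass

    def test_b(self):
        pass


nested = unittest.TestSuite([unittest.TestSuite([Dummy("test_a")]), Dummy("test_b")])
print(flatten_test_suite(nested).countTestCases())  # 2
```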
def main():
options, selected_tests = parse_args()
all_test_suites = unittest.defaultTestLoader.discover(start_dir="test")
has_nvim = subprocess.check_output(
[options.vim, "-e", "-s", "-c", "verbose echo has('nvim')", "+q"],
stderr=subprocess.STDOUT,
)
if has_nvim == b"0":
vim_flavor = "vim"
elif has_nvim == b"1":
vim_flavor = "neovim"
else:
assert 0, "Unexpected output, has_nvim=%r" % has_nvim
if options.interface == "tmux":
assert vim_flavor == "vim", (
"Interface is tmux, but vim_flavor is %s" % vim_flavor
)
vim = VimInterfaceTmux(options.vim, options.session)
else:
assert vim_flavor == "neovim", (
"Interface is TmuxNeovim, but vim_flavor is %s" % vim_flavor
)
vim = VimInterfaceTmuxNeovim(options.vim, options.session)
if not options.clone_plugins and platform.system() == "Windows":
raise RuntimeError(
                "TODO: TestSuite is broken under windows. Volunteers wanted!"
)
# vim = VimInterfaceWindows()
# vim.focus()
all_other_plugins = set()
tests = set()
suite = unittest.TestSuite()
for test in flatten_test_suite(all_test_suites):
test.interrupt = options.interrupt
test.retries = options.retries
test.test_plugins = options.plugins
test.python_host_prog = options.python_host_prog
test.expected_python_version = options.expected_python_version
test.vim = vim
test.vim_flavor = vim_flavor
test.pdb_enable = options.pdb_enable
test.pdb_host = options.pdb_host
test.pdb_port = options.pdb_port
test.pdb_block = options.pdb_block
all_other_plugins.update(test.plugins)
if len(selected_tests):
id = test.id().split(".")[1]
if not any([id.startswith(t) for t in selected_tests]):
continue
tests.add(test)
suite.addTests(tests)
if options.plugins or options.clone_plugins:
setup_other_plugins(all_other_plugins)
if options.clone_plugins:
return
v = 2 if options.verbose else 1
        successful = (
            unittest.TextTestRunner(verbosity=v, failfast=options.failfast)
            .run(suite)
            .wasSuccessful()
        )
        return 0 if successful else 1
sys.exit(main())
# ==== SirVer/ultisnips (GPL-3.0): SirVer_ultisnips/test/test_Plugin.py ====
import sys
from test.vim_test_case import VimTestCase as _VimTest
from test.constant import *
class Plugin_SuperTab_SimpleTest(_VimTest):
plugins = ["ervandew/supertab"]
snippets = ("long", "Hello", "", "w")
keys = (
"longtextlongtext\n" + "longt" + EX + "\n" + "long" + EX # Should complete word
) # Should expand
wanted = "longtextlongtext\nlongtextlongtext\nHello"
def _before_test(self):
# Make sure that UltiSnips has the keymap
self.vim.send_to_vim(":call UltiSnips#map_keys#MapKeys()\n")
def _extra_vim_config(self, vim_config):
assert EX == "\t" # Otherwise this test needs changing.
vim_config.append('let g:SuperTabDefaultCompletionType = "<c-p>"')
vim_config.append('let g:SuperTabRetainCompletionDuration = "insert"')
vim_config.append("let g:SuperTabLongestHighlight = 1")
vim_config.append("let g:SuperTabCrMapping = 0")
# ==== SirVer/ultisnips (GPL-3.0): SirVer_ultisnips/test/test_Folding.py ====
from test.vim_test_case import VimTestCase as _VimTest
from test.constant import *
class FoldingEnabled_SnippetWithFold_ExpectNoFolding(_VimTest):
def _extra_vim_config(self, vim_config):
vim_config.append("set foldlevel=0")
vim_config.append("set foldmethod=marker")
snippets = (
"test",
r"""Hello {{{
${1:Welt} }}}""",
)
keys = "test" + EX + "Ball"
wanted = """Hello {{{
Ball }}}"""
class FoldOverwrite_Simple_ECR(_VimTest):
snippets = (
"fold",
"""# ${1:Description} `!p snip.rv = vim.eval("&foldmarker").split(",")[0]`
# End: $1 `!p snip.rv = vim.eval("&foldmarker").split(",")[1]`""",
)
keys = "fold" + EX + "hi"
wanted = "# hi {{{\n\n# End: hi }}}"
class Fold_DeleteMiddleLine_ECR(_VimTest):
snippets = (
"fold",
"""# ${1:Description} `!p snip.rv = vim.eval("&foldmarker").split(",")[0]`
# End: $1 `!p snip.rv = vim.eval("&foldmarker").split(",")[1]`""",
)
keys = "fold" + EX + "hi" + ESC + "jdd"
wanted = "# hi {{{\n\n# End: hi }}}"
class PerlSyntaxFold(_VimTest):
def _extra_vim_config(self, vim_config):
vim_config.append("set foldlevel=0")
vim_config.append("syntax enable")
vim_config.append("set foldmethod=syntax")
vim_config.append("let g:perl_fold = 1")
vim_config.append("so $VIMRUNTIME/syntax/perl.vim")
snippets = (
"test",
r"""package ${1:`!v printf('c%02d', 3)`};
${0}
1;""",
)
keys = "test" + EX + JF + "sub junk {}"
wanted = "package c03;\nsub junk {}\n1;"
# ==== SirVer/ultisnips (GPL-3.0): SirVer_ultisnips/test/vim_test_case.py ====
# encoding: utf-8
# pylint: skip-file
import os
import subprocess
import tempfile
import textwrap
import time
import unittest
from test.constant import SEQUENCES, EX
from test.vim_interface import create_directory, TempFileManager
def plugin_cache_dir():
"""The directory that we check out our bundles to."""
return os.path.join(tempfile.gettempdir(), "UltiSnips_test_vim_plugins")
class VimTestCase(unittest.TestCase, TempFileManager):
snippets = ()
files = {}
text_before = " --- some text before --- \n\n"
text_after = "\n\n --- some text after --- "
expected_error = ""
wanted = ""
keys = ""
sleeptime = 0.00
output = ""
plugins = []
# Skip this test for the given reason or None for not skipping it.
skip_if = lambda self: None
version = None # Will be set to vim --version output
maxDiff = None # Show all diff output, always.
vim_flavor = None # will be 'vim' or 'neovim'.
expected_python_version = (
None # If set, we need to check that our Vim is running this python version.
)
def __init__(self, *args, **kwargs):
unittest.TestCase.__init__(self, *args, **kwargs)
TempFileManager.__init__(self, "Case")
def runTest(self):
if self.expected_python_version:
self.assertEqual(self.in_vim_python_version, self.expected_python_version)
# Only checks the output. All work is done in setUp().
wanted = self.text_before + self.wanted + self.text_after
SLEEPTIMES = [0.01, 0.15, 0.3, 0.4, 0.5, 1]
for i in range(self.retries):
if self.output and self.expected_error:
self.assertRegexpMatches(self.output, self.expected_error)
return
if self.output != wanted or self.output is None:
# Redo this, but slower
self.sleeptime = SLEEPTIMES[min(i, len(SLEEPTIMES) - 1)]
self.tearDown()
self.setUp()
self.assertMultiLineEqual(self.output, wanted)
def _extra_vim_config(self, vim_config):
"""Adds extra lines to the vim_config list."""
def _before_test(self):
"""Send these keys before the test runs.
Used for buffer local variables and other options.
"""
def _create_file(self, file_path, content):
"""Creates a file in the runtimepath that is created for this test.
Returns the absolute path to the file.
"""
return self.write_temp(file_path, textwrap.dedent(content + "\n"))
def _link_file(self, source, relative_destination):
"""Creates a link from 'source' to the 'relative_destination' in our
temp dir."""
absdir = self.name_temp(relative_destination)
create_directory(absdir)
os.symlink(source, os.path.join(absdir, os.path.basename(source)))

    def setUp(self):
        if not VimTestCase.version:
            VimTestCase.version, _ = subprocess.Popen(
                [self.vim.vim_executable, "--version"],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            ).communicate()
            VimTestCase.version = VimTestCase.version.decode("utf-8")

        if self.plugins and not self.test_plugins:
            return self.skipTest("Not testing integration with other plugins.")
        reason_for_skipping = self.skip_if()
        if reason_for_skipping is not None:
            return self.skipTest(reason_for_skipping)

        vim_config = []
        vim_config.append("set nocompatible")
        vim_config.append(
            "set runtimepath=$VIMRUNTIME,%s,%s"
            % (os.path.dirname(os.path.dirname(__file__)), self._temp_dir)
        )

        if self.plugins:
            self._link_file(
                os.path.join(plugin_cache_dir(), "vim-pathogen", "autoload"), "."
            )
            for plugin in self.plugins:
                self._link_file(
                    os.path.join(plugin_cache_dir(), os.path.basename(plugin)), "bundle"
                )
            vim_config.append("execute pathogen#infect()")

        # Some configurations are unnecessary for vanilla Vim, but Neovim
        # defines some defaults differently.
        vim_config.append("syntax on")
        vim_config.append("filetype plugin indent on")
        vim_config.append("set nosmarttab")
        vim_config.append("set noautoindent")
        vim_config.append('set backspace=""')
        vim_config.append('set clipboard=""')
        vim_config.append("set encoding=utf-8")
        vim_config.append("set fileencoding=utf-8")
        vim_config.append("set buftype=nofile")
        vim_config.append("set shortmess=at")
        vim_config.append('let @" = ""')
        assert EX == "\t"  # Otherwise you need to change the next line
        vim_config.append('let g:UltiSnipsExpandTrigger="<tab>"')
        vim_config.append('let g:UltiSnipsJumpForwardTrigger="?"')
        vim_config.append('let g:UltiSnipsJumpBackwardTrigger="+"')
        vim_config.append('let g:UltiSnipsListSnippets="@"')
        vim_config.append(
            "let g:UltiSnipsDebugServerEnable={}".format(1 if self.pdb_enable else 0)
        )
        vim_config.append('let g:UltiSnipsDebugHost="{}"'.format(self.pdb_host))
        vim_config.append("let g:UltiSnipsDebugPort={}".format(self.pdb_port))
        vim_config.append(
            "let g:UltiSnipsPMDebugBlocking={}".format(1 if self.pdb_block else 0)
        )

        # Work around https://github.com/vim/vim/issues/3117 for testing >
        # py3.7 on Vim 8.1. Actually also reported against UltiSnips
        # https://github.com/SirVer/ultisnips/issues/996
        if "Vi IMproved 8.1" in self.version:
            vim_config.append("silent! python3 1")

        vim_config.append('let g:UltiSnipsSnippetDirectories=["us"]')
        if self.python_host_prog:
            vim_config.append('let g:python3_host_prog="%s"' % self.python_host_prog)
        self._extra_vim_config(vim_config)

        # Finally, add the snippets and some configuration for the test.
        vim_config.append("py3 << EOF")
        vim_config.append("from UltiSnips import UltiSnips_Manager\n")

        if len(self.snippets) and not isinstance(self.snippets[0], tuple):
            self.snippets = (self.snippets,)
        for s in self.snippets:
            sv, content = s[:2]
            description = ""
            options = ""
            priority = 0
            if len(s) > 2:
                description = s[2]
            if len(s) > 3:
                options = s[3]
            if len(s) > 4:
                priority = s[4]
            vim_config.append(
                "UltiSnips_Manager.add_snippet(%r, %r, %r, %r, priority=%i)"
                % (sv, content, description, options, priority)
            )

        # Fill buffer with default text and place cursor in between.
        prefilled_text = (self.text_before + self.text_after).splitlines()
        vim_config.append("import vim\n")
        vim_config.append("vim.current.buffer[:] = %r\n" % prefilled_text)
        vim_config.append(
            "vim.current.window.cursor = (max(len(vim.current.buffer)//2, 1), 0)"
        )

        # End of python stuff.
        vim_config.append("EOF")

        for name, content in self.files.items():
            self._create_file(name, content)

        self.in_vim_python_version = self.vim.launch(vim_config)

        self._before_test()

        if not self.interrupt:
            # Go into insert mode and type the keys but leave Vim some time to
            # react.
            text = "i" + self.keys
            while text:
                to_send = None
                for seq in SEQUENCES:
                    if text.startswith(seq):
                        to_send = seq
                        break
                to_send = to_send or text[0]
                self.vim.send_to_vim(to_send)
                time.sleep(self.sleeptime)
                text = text[len(to_send) :]
            self.output = self.vim.get_buffer_data()

    def tearDown(self):
        if self.interrupt:
            print("Working directory: %s" % (self._temp_dir))
            return
        self.vim.leave_with_wait()
        self.clear_temp()
# vim:fileencoding=utf-8: