commit e6c6ab6bf7
Author: BigfootACA <bigfoot@classfun.cn>
Date:   2024-05-17 23:04:34 +08:00

Initial commit

Signed-off-by: BigfootACA <bigfoot@classfun.cn>

87 changed files with 7543 additions and 0 deletions

.gitignore (vendored, new file, 54 lines)
@@ -0,0 +1,54 @@
*.rej
*.orig
*.swp
*.save*
*.o
*.a
*.out
*.lib
*.obj
*.dll
*.so
*.exe
*.gch
*.plist
*.mo
*.gmo
*.fd
*.iso
*.img
*.img.*
*.qcow2
*.vhd
*.vdi
*.vmdk
*.cpio
*.cpio.*
*.ttf
*.ttc
*.pcf
*.pcf.*
*.efi
*.db
vgcore.*
/build
initramfs*.*
initrd*.*
System.map*
/cmake-build-*
/.idea
/.vscode
/.cache
CMakeCache.txt
CMakeFiles
Makefile
cmake_install.cmake
node_modules
package-lock.json
fonts.scale
fonts.dir
/config.json
__pycache__
/configs/custom/*
/devices/custom/*
!.gitkeep

LICENSE (new file, 232 lines)
@@ -0,0 +1,232 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>.

8
build.py Executable file
View File

@ -0,0 +1,8 @@
#!/usr/bin/env python3
import sys
import os
if __name__ == '__main__':
sys.path.insert(0, os.path.realpath(os.path.dirname(__file__)))
from builder.main import main
main()

0
builder/__init__.py Normal file
View File

152
builder/build/bootstrap.py Normal file
View File

@ -0,0 +1,152 @@
import os
import shutil
from logging import getLogger
from builder.disk import image
from builder.build import mount, fstab, grub, user, filesystem
from builder.build import locale, systemd, mkinitcpio, names
from builder.build import pacman as pacman_build
from builder.component import pacman as pacman_comp
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
def cleanup(ctx: ArchBuilderContext):
"""
Cleanup unneeded files for Arch Linux
"""
root = ctx.get_rootfs()
def rm_rf(path: str):
real = os.path.join(root, path)
if not os.path.exists(real): return
shutil.rmtree(real, True)
def del_child(path: str, prefix: str = None, suffix: str = None):
real = os.path.join(root, path)
if not os.path.exists(real): return
for file in os.listdir(real):
if prefix and not file.startswith(prefix): continue
if suffix and not file.endswith(suffix): continue
rm_rf(os.path.join(real, file))
rm_rf("var/log/pacman.log")
del_child("var/cache/pacman/pkg")
del_child("var/lib/pacman/sync")
del_child("etc", suffix="-")
rm_rf("etc/.pwd.lock")
def do_copy(ctx: ArchBuilderContext, src: str, dst: str):
"""
Copy the rootfs into the mounted image via rsync
"""
rsrc = os.path.realpath(src)
rdst = os.path.realpath(dst)
log.info("start copying rootfs...")
ret = ctx.run_external([
"rsync", "--archive", "--recursive",
"--delete", "--info=progress2",
rsrc + os.sep, rdst
])
os.sync()
if ret != 0: raise OSError("rsync failed")
def build_rootfs(ctx: ArchBuilderContext):
"""
Build the whole rootfs and generate the output image
"""
log.info("building rootfs")
# create folders
os.makedirs(ctx.work, mode=0o755, exist_ok=True)
os.makedirs(ctx.get_rootfs(), mode=0o0755, exist_ok=True)
os.makedirs(ctx.get_output(), mode=0o0755, exist_ok=True)
os.makedirs(ctx.get_mount(), mode=0o0755, exist_ok=True)
# build rootfs contents
if not ctx.repack:
try:
# initialize basic folders
mount.init_rootfs(ctx)
# initialize mount points for chroot
mount.init_mount(ctx)
# initialize pacman context
pacman = pacman_comp.Pacman(ctx)
# initialize build time keyring
pacman.init_keyring()
# trust PGP keys from config (for pacman databases)
pacman_build.trust_all(ctx, pacman)
# update pacman repos databases
pacman.load_databases()
# install all keyring packages before other packages
pacman_build.proc_pacman_keyring(ctx, pacman)
# actually install all packages
pacman_build.proc_pacman(ctx, pacman)
# reload user databases after installing packages
ctx.reload_passwd()
# create custom users and groups
user.proc_usergroup(ctx)
# build-time file add/remove hooks
filesystem.proc_filesystem(ctx)
# enable / disable systemd units
systemd.proc_systemd(ctx)
# setup locale (timezone / i18n language / fonts / input methods)
locale.proc_locale(ctx)
# setup system names (environments / hosts / hostname / machine-info)
names.proc_names(ctx)
# recreate initramfs
mkinitcpio.proc_mkinitcpio(ctx)
# reset machine-id (never ship a duplicated machine id)
systemd.proc_machine_id(ctx)
finally:
# kill spawned daemons (gpg-agent, dirmngr, ...)
ctx.cgroup.kill_all()
# remove mount points
mount.undo_mounts(ctx)
# cleanup unneeded files
cleanup(ctx)
# reload user database before creating images
ctx.reload_passwd()
# create images and initialize bootloader
try:
# create disk and filesystem image
image.proc_image(ctx)
# generate fstab
fstab.proc_fstab(ctx)
# install grub bootloader
grub.proc_grub(ctx)
# run add-files hooks (for bootloader settings)
filesystem.add_files_all(ctx, "post-fs")
# copy rootfs into image
do_copy(ctx, ctx.get_rootfs(), ctx.get_mount())
finally:
ctx.cleanup()
# finish
log.info("build done!")
os.sync()
log.info(f"your images are in {ctx.get_output()}")
ctx.run_external(["ls", "-lh", ctx.get_output()])

142
builder/build/filesystem.py Normal file
View File

@ -0,0 +1,142 @@
import os
import shutil
from logging import getLogger
from builder.lib import utils
from builder.component import user
from builder.lib.config import ArchBuilderConfigError
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
def chroot_run(
ctx: ArchBuilderContext,
cmd: str | list[str],
cwd: str = None,
env: dict = None,
stdin: str | bytes = None,
) -> int:
"""
Chroot into the rootfs and run a program.
If you are running a cross build, you need to install qemu-user-static-binfmt first.
"""
if not ctx.chroot:
raise RuntimeError("rootfs is not ready for chroot")
path = ctx.get_rootfs()
args = ["chroot", path]
args.extend(utils.parse_cmd_args(cmd))
return ctx.run_external(args, cwd, env, stdin)
def proc_mkdir(ctx: ArchBuilderContext, file: dict, path: str):
root = ctx.get_rootfs()
dir_uid, dir_gid, dir_mode = 0, 0, 0o0755
if "mkdir" in file:
if type(file["mkdir"]) is bool:
# mkdir = False: skip mkdir
if not file["mkdir"]: return
elif type(file["mkdir"]) is dict:
if "mode" in file: dir_mode = int(file["mode"])
dir_uid, dir_gid = user.parse_user_from(ctx, file)
# mkdir recursive
def mkdir_loop(folder: str):
# strip end slash
if folder.endswith("/"): folder = folder[0:-1]
if len(folder) == 0: return
# resolve to rootfs
real = os.path.join(root, folder)
if os.path.exists(real): return
# create parent folder first
mkdir_loop(os.path.dirname(folder))
log.debug(f"create folder {real} with {dir_mode:04o}")
os.mkdir(real, mode=dir_mode)
log.debug(f"chown folder {real} to {dir_uid}:{dir_gid}")
os.chown(real, uid=dir_uid, gid=dir_gid)
mkdir_loop(os.path.dirname(path))
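The `mkdir_loop` above creates missing parents by recursing on `dirname()` before each `mkdir`; a self-contained sketch of the same walk (without the mode/owner handling pulled from the config):

```python
import os
import tempfile

def mkdir_loop(root: str, folder: str, mode: int = 0o755):
	# strip a trailing slash so dirname() eventually reaches ""
	if folder.endswith("/"):
		folder = folder[:-1]
	if len(folder) == 0:
		return
	real = os.path.join(root, folder)
	if os.path.exists(real):
		return
	# create the parent first, then this folder
	mkdir_loop(root, os.path.dirname(folder), mode)
	os.mkdir(real, mode=mode)

with tempfile.TemporaryDirectory() as root:
	mkdir_loop(root, "etc/pacman.d/hooks")
	print(os.path.isdir(os.path.join(root, "etc/pacman.d/hooks")))  # True
```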
def check_allowed(path: str, action: str):
"""
Check that files to add / remove are under an allowed prefix.
Why can't we write into other folders?
1. Writing to pacman-managed folders (/usr, /opt, ...) WILL BREAK SYSTEM UPGRADES
2. Never add files to home folders (/home/xxx, /root, ...);
when new users are created, these configs will be missing or broken.
What if I want to write to other folders?
1. /usr/bin/ /usr/lib/ /opt/ ...: do not add files here;
make a package and install it via pacman instead.
2. /home/xxx: add files to /etc/skel; they are copied when a user is created
3. /usr/lib/systemd/system: do not add services or overrides here;
use /etc/systemd/system (see Unit File Load Path in man:systemd.unit(5))
4. /run /tmp /dev: these are mounted as virtual filesystems at boot;
use systemd-tmpfiles to create entries there (/etc/tmpfiles.d)
Why are these folders writable?
1. /etc/ holds administrator configs
2. /boot/ is used for system boot; bootloader configs belong here
"""
if not path.startswith(("/etc/", "/boot/")):
raise ArchBuilderConfigError(f"{action} {path} is not allowed")
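The whitelist amounts to the prefix check above; a trimmed, dependency-free sketch (using `ValueError` in place of `ArchBuilderConfigError`, which lives in `builder.lib.config`):

```python
def check_allowed(path: str, action: str):
	# only /etc/ and /boot/ may receive build-time file hooks
	if not path.startswith(("/etc/", "/boot/")):
		raise ValueError(f"{action} {path} is not allowed")

check_allowed("/etc/motd", "add files into")            # allowed
check_allowed("/boot/grub/grub.cfg", "add files into")  # allowed
try:
	check_allowed("/usr/bin/tool", "add files into")
except ValueError as err:
	print(err)  # add files into /usr/bin/tool is not allowed
```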
def add_file(ctx: ArchBuilderContext, file: dict):
# at least path content
if "path" not in file:
raise ArchBuilderConfigError("no path set in file")
if "content" not in file:
raise ArchBuilderConfigError("no content set in file")
root = ctx.get_rootfs()
path: str = file["path"]
if path.startswith("/"): path = path[1:]
uid, gid = user.parse_user_from(ctx, file)
# file encoding. default to UTF-8
encode = file["encode"] if "encode" in file else "utf-8"
# follow symbolic links
follow = file["follow"] if "follow" in file else True
# files mode
mode = int(file["mode"]) if "mode" in file else 0o0644
check_allowed(file["path"], "add files into")
# create parent folders
proc_mkdir(ctx, file, path)
# resolve to rootfs
real = os.path.join(root, path)
if not follow and os.path.exists(real): os.remove(real)
log.debug(f"create file {real}")
with open(real, "wb") as f:
content: str = file["content"]
log.debug(
"write to %s with %s",
real, content.strip()
)
f.write(content.encode(encode))
log.debug(f"chmod file {real} to {mode:04o}")
os.chmod(real, mode=mode)
log.debug(f"chown file {real} to {uid}:{gid}")
os.chown(real, uid=uid, gid=gid)
log.info("added file %s successfully", file["path"])
def add_files_all(ctx: ArchBuilderContext, stage: str = None):
for file in ctx.get("filesystem.files", []):
cs = file["stage"] if "stage" in file else None
if cs != stage: continue
add_file(ctx, file)
def remove_all(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
for file in ctx.get("filesystem.remove", []):
check_allowed(file, "remove files from")
# resolve into the rootfs instead of deleting from the host
shutil.rmtree(os.path.join(root, file.lstrip("/")), True)
def proc_filesystem(ctx: ArchBuilderContext):
add_files_all(ctx)
remove_all(ctx)

51
builder/build/fstab.py Normal file
View File

@ -0,0 +1,51 @@
import os
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.lib.utils import open_config
log = getLogger(__name__)
def write_fstab(ctx: ArchBuilderContext):
log.debug(
"generate fstab:\n\t%s",
ctx.fstab.to_mount_file("\n\t").strip()
)
path = os.path.join(ctx.get_rootfs(), "etc/fstab")
with open_config(path) as f:
ctx.fstab.write_mount_file(f)
def mount_all(ctx: ArchBuilderContext):
path = ctx.get_mount()
root = ctx.get_rootfs()
if not os.path.exists(path):
os.mkdir(path, mode=0o0755)
if ctx.fstab[0].target != "/":
raise RuntimeError("no root to mount")
for mnt in ctx.fstab:
m = mnt.clone()
if m.source == "none": continue
if m.source not in ctx.fsmap:
raise RuntimeError(f"source {m.source} cannot map to host")
m.source = ctx.fsmap[m.source]
if m.target == "/": in_mnt, in_root = path, root
elif m.target.startswith("/"):
folder = m.target[1:]
in_mnt = os.path.join(path, folder)
in_root = os.path.join(root, folder)
elif m.fstype == "swap" or m.target == "none": continue
else: raise RuntimeError(f"target {m.target} cannot map to host")
if in_mnt:
m.target = in_mnt
if not os.path.exists(in_mnt):
os.makedirs(in_mnt, mode=0o0755)
if in_root and not os.path.exists(in_root):
os.makedirs(in_root, mode=0o0755)
m.mount()
ctx.mounted.insert(0, m)
def proc_fstab(ctx: ArchBuilderContext):
ctx.fstab.resort()
write_fstab(ctx)
mount_all(ctx)

292
builder/build/grub.py Normal file
View File

@ -0,0 +1,292 @@
import os
import shutil
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.lib.config import ArchBuilderConfigError
from builder.lib.loop import loop_get_backing, loop_get_offset
from builder.lib.blkid import Blkid
from builder.lib.mount import MountPoint
log = getLogger(__name__)
blkid = Blkid()
modules = [
"part_msdos", "part_gpt", "part_apple", "ext2", "fat", "ntfs", "sleep",
"ufs1", "ufs2", "cpio", "search", "search_fs_file", "minicmd",
"search_fs_uuid", "search_label", "reboot", "halt", "gzio", "serial",
"boot", "file", "f2fs", "iso9660", "hfs", "hfsplus", "zfs", "minix",
"memdisk", "sfs", "lvm", "http", "tftp", "udf", "xfs", "date", "echo",
"all_video", "btrfs", "disk", "configfile", "terminal",
]
def get_prop(
ctx: ArchBuilderContext,
name: str,
cfg: dict,
path: bool = False,
multi: bool = False,
) -> str | None:
value = ctx.get(f"kernel.{name}", None)
if name in cfg: value = cfg[name]
if value is None: return None
if type(value) is str:
value = [value]
if len(value) == 0: return None
if path:
for i in range(len(value)):
if not value[i].startswith("/"):
value[i] = "/" + value[i]
if multi: value = " ".join(value)
else: value = value[0]
return value
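Most of `get_prop` is list/str normalization; the same logic isolated from the `ctx` config lookup (hypothetical helper name, same semantics as the code above):

```python
def normalize(value, path: bool = False, multi: bool = False):
	# None or an empty list means the property is unset
	if value is None:
		return None
	if isinstance(value, str):
		value = [value]
	if len(value) == 0:
		return None
	if path:
		# grub wants absolute paths; prefix any relative entries
		value = [v if v.startswith("/") else "/" + v for v in value]
	# multi joins everything; otherwise only the first entry survives
	return " ".join(value) if multi else value[0]

print(normalize("vmlinuz", path=True))                        # /vmlinuz
print(normalize(["a.dtb", "b.dtb"], path=True, multi=True))   # /a.dtb /b.dtb
```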
def fstype_to_mod(name: str) -> str:
match name:
case "ext3": return "ext2"
case "ext4": return "ext2"
case "vfat": return "fat"
case "fat12": return "fat"
case "fat16": return "fat"
case "fat32": return "fat"
case "msdos": return "fat"
case _: return name
def gen_menuentry(ctx: ArchBuilderContext, cfg: dict) -> str:
ret = ""
name = cfg["name"] if "name" in cfg else "Linux"
kernel = get_prop(ctx, "kernel", cfg, True)
initramfs = get_prop(ctx, "initramfs", cfg, True, True)
devicetree = get_prop(ctx, "devicetree", cfg, True, True)
cmdline = get_prop(ctx, "cmdline", cfg, False, True)
path = get_prop(ctx, "path", cfg, False, False)
if kernel is None: raise ArchBuilderConfigError("no kernel for grub")
if cmdline is None: cmdline = ""
ret += f"menuentry '{name}' {{\n"
if path:
fs = ctx.fstab.find_target(path)
if fs is None or len(fs) == 0 or fs[0] is None:
raise ArchBuilderConfigError(f"mountpoint {path} not found")
dev = fs[0].source
if dev in ctx.fsmap: dev = ctx.fsmap[dev]
uuid = blkid.get_tag_value(None, "UUID", dev)
if uuid is None: raise RuntimeError(f"cannot detect uuid for {path}")
ret += "\tinsmod %s\n" % fstype_to_mod(fs[0].fstype)
ret += f"\tsearch --no-floppy --fs-uuid --set=root {uuid}\n"
if devicetree:
ret += "\techo 'Loading Device Tree...'\n"
ret += f"\tdevicetree {devicetree}\n"
ret += "\techo 'Loading Kernel...'\n"
ret += f"\tlinux {kernel} {cmdline}\n"
if initramfs:
ret += "\techo 'Loading Initramfs...'\n"
ret += f"\tinitrd {initramfs}\n"
ret += "\techo 'Booting...'\n"
ret += "}\n"
return ret
def gen_basic(ctx: ArchBuilderContext) -> str:
ret = ""
ret += "insmod part_gpt\n"
ret += "insmod part_msdos\n"
ret += "insmod all_video\n"
ret += "terminal_input console\n"
ret += "terminal_output console\n"
ret += "if serial --unit=0 --speed=115200; then\n"
ret += "\tterminal_input --append console\n"
ret += "\tterminal_output --append console\n"
ret += "fi\n"
ret += "set timeout_style=menu\n"
timeout = ctx.get("bootloader.timeout", 5)
ret += f"set timeout={timeout}\n"
default = 0
items = ctx.get("bootloader.items", [])
for idx in range(len(items)):
item = items[idx]
if "default" in item and item["default"]:
default = idx
ret += f"set default={default}\n"
return ret
def mkconfig(ctx: ArchBuilderContext) -> str:
ret = ""
ret += gen_basic(ctx)
for item in ctx.get("bootloader.items", []):
ret += gen_menuentry(ctx, item)
return ret
def proc_targets(ctx: ArchBuilderContext, install: str):
copies = [".mod", ".lst"]
folder = os.path.join(ctx.get_rootfs(), "usr/lib/grub")
for target in ctx.get("grub.targets", []):
if "/" in target: raise ArchBuilderConfigError(f"bad target {target}")
base = os.path.join(folder, target)
if not os.path.exists(os.path.join(base, "linux.mod")):
raise ArchBuilderConfigError(f"target {target} not found")
dest = os.path.join(install, target)
os.makedirs(dest, mode=0o0755, exist_ok=True)
for file in os.listdir(base):
if not any((file.endswith(name) for name in copies)):
continue
shutil.copyfile(
os.path.join(base, file),
os.path.join(dest, file),
)
log.info(f"installed grub target {target}")
def proc_config(ctx: ArchBuilderContext, install: str):
content = mkconfig(ctx)
cfg = os.path.join(install, "grub.cfg")
with open(cfg, "w") as f:
f.write(content)
log.info(f"generated grub config {cfg}")
def efi_arch_name(target: str) -> str:
match target:
case "arm64-efi": return "aa64"
case "x86_64-efi": return "x64"
case "arm-efi": return "arm"
case "i386-efi": return "ia32"
case "riscv64-efi": return "riscv64"
case _: raise RuntimeError(
f"unsupported {target} for efi name"
)
def efi_boot_name(target: str) -> str:
name = efi_arch_name(target)
return f"boot{name}.efi"
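`efi_arch_name` and `efi_boot_name` together implement the UEFI removable-media fallback naming (`\EFI\BOOT\BOOT<ARCH>.EFI`); the same mapping expressed as a table, an equivalent sketch:

```python
def efi_arch_name(target: str) -> str:
	# GRUB platform target -> UEFI short architecture name
	names = {
		"arm64-efi": "aa64", "x86_64-efi": "x64",
		"arm-efi": "arm", "i386-efi": "ia32",
		"riscv64-efi": "riscv64",
	}
	if target not in names:
		raise RuntimeError(f"unsupported {target} for efi name")
	return names[target]

def efi_boot_name(target: str) -> str:
	# the fallback boot file installed under efi/boot/
	return f"boot{efi_arch_name(target)}.efi"

print(efi_boot_name("x86_64-efi"))  # bootx64.efi
```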
def proc_mkimage_efi(ctx: ArchBuilderContext, target: str):
cmds = ["grub-mkimage"]
root = ctx.get_rootfs()
efi_folders = ["/boot", "/boot/efi", "/efi", "/esp"]
base = os.path.join(root, "usr/lib/grub", target)
install = ctx.get("grub.path", "/boot/grub")
if not target.endswith("-efi"):
raise RuntimeError("mkimage efi only for *-efi")
esp: MountPoint | None = None
grub: MountPoint | None = None
fdir = install + "/"
for mnt in ctx.fstab:
if fstype_to_mod(mnt.fstype) == "fat":
if mnt.target in efi_folders:
esp = mnt
tdir = mnt.target
if not tdir.endswith("/"): tdir += "/"
if fdir.startswith(tdir):
if (not grub) or mnt.level >= grub.level:
grub = mnt
if esp is None: raise RuntimeError("efi partition not found")
if grub is None: raise RuntimeError("grub install folder not found")
esp_dest = esp.target
if esp_dest.startswith("/"): esp_dest = esp_dest[1:]
if not install.startswith("/"): install = "/" + install
if not install.startswith(grub.target):
raise RuntimeError("grub install prefix not found")
prefix = install[len(grub.target):]
if not prefix.startswith("/"): prefix = "/" + prefix
device = (ctx.fsmap[grub.source] if grub.source in ctx.fsmap else grub.source)
uuid = blkid.get_tag_value(None, "UUID", device)
if not uuid: raise RuntimeError(
"failed to detect uuid for grub install path"
)
efi_folder = os.path.join(root, esp_dest)
grub_folder = os.path.join(root, install[1:])
cmds.append(f"--format={target}")
cmds.append(f"--directory={base}")
cmds.append(f"--prefix={prefix}")
cmds.append("--compression=xz")
builtin = os.path.join(grub_folder, "grub.builtin.cfg")
with open(builtin, "w") as f:
f.write(f"search --no-floppy --fs-uuid --set=root {uuid}\n")
f.write(f"set prefix=\"($root){prefix}\"\n")
f.write("normal\n")
f.write("echo \"Failed to switch into normal mode\"\n")
f.write("sleep 5\n")
cmds.append(f"--config={builtin}")
efi = os.path.join(efi_folder, "efi/boot")
os.makedirs(efi, mode=0o0755, exist_ok=True)
out = os.path.join(efi, efi_boot_name(target))
cmds.append(f"--output={out}")
if os.path.exists(out): os.remove(out)
cmds.extend(modules)
ret = ctx.run_external(cmds)
if ret != 0: raise OSError("grub-mkimage failed")
log.info(f"generated grub {target} efi image {out}")
def proc_bootsec(ctx: ArchBuilderContext, target: str):
mods = []
cmds = ["grub-install"]
if target != "i386-pc":
raise RuntimeError("bootsec only for i386-pc")
mount = ctx.get_mount()
root = ctx.get_rootfs()
install: str = ctx.get("grub.path", "/boot/grub")
if install.startswith("/"): install = install[1:]
grub = os.path.join(root, "usr/lib/grub", target)
if install.endswith("/grub"): install = install[0:-5]
cmds.append(f"--target={target}")
cmds.append(f"--directory={grub}")
mods.append("part_msdos")
mods.append("part_gpt")
rootfs = ctx.fstab.find_target("/")
mnt_install = os.path.join(mount, install)
cmds.append(f"--boot-directory={mnt_install}")
if rootfs is None or len(rootfs) <= 0 or rootfs[0] is None:
raise RuntimeError("rootfs mount point not found")
rootfs = rootfs[0]
mods.append(fstype_to_mod(rootfs.fstype))
if len(mods) > 0:
cmds.append("--modules=" + (" ".join(mods)))
device = ctx.get("grub.device", None)
if device is None:
source = rootfs.source
if source in ctx.fsmap:
source = ctx.fsmap[source]
if not source.startswith("/dev/loop"):
raise RuntimeError("no device to detect grub install")
if loop_get_offset(source) <= 0:
raise RuntimeError("no loop part to detect grub install")
device = loop_get_backing(source)
if device is None:
raise RuntimeError("no device for grub install")
cmds.append(device)
ret = ctx.run_external(cmds)
if ret != 0: raise OSError("grub-install failed")
src = os.path.join(mnt_install, "grub")
dst = os.path.join(root, install, "grub")
shutil.copytree(src, dst, dirs_exist_ok=True)
def proc_install(ctx: ArchBuilderContext):
targets: list[str] = ctx.get("grub.targets", [])
for target in targets:
if target == "i386-pc":
proc_bootsec(ctx, target)
elif target.endswith("-efi"):
proc_mkimage_efi(ctx, target)
else: raise ArchBuilderConfigError(
f"unsupported target {target}"
)
def proc_grub(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
install: str = ctx.get("grub.path", "/boot/grub")
if install.startswith("/"):
install = install[1:]
install = os.path.join(root, install)
os.makedirs(install, mode=0o0755, exist_ok=True)
proc_config(ctx, install)
proc_targets(ctx, install)
proc_install(ctx)

64
builder/build/locale.py Normal file
View File

@ -0,0 +1,64 @@
import os
from logging import getLogger
from builder.build import filesystem
from builder.lib.context import ArchBuilderContext
from builder.lib.config import ArchBuilderConfigError
from builder.lib.utils import open_config
log = getLogger(__name__)
def reset_locale(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
archive = os.path.join(root, "usr/lib/locale/locale-archive")
if os.path.exists(archive): os.remove(archive)
def enable_all(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
locales = ctx.get("locale.enable", [])
log.info("setup enabled locale")
file = os.path.join(root, "etc/locale.gen")
with open_config(file) as f:
for line in locales:
log.debug(f"adding locale {line}")
f.write(line)
f.write("\n")
if len(locales) == 0:
f.write("# No locales enabled\n")
filesystem.chroot_run(ctx, "locale-gen")
def set_default(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
default = ctx.get("locale.default", None)
if default is None: default = "C"
log.info(f"default locale: {default}")
conf = os.path.join(root, "etc/locale.conf")
with open_config(conf) as f:
f.write(f"LANG={default}\n")
def set_timezone(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
timezone = ctx.get("timezone", None)
if timezone is None: timezone = "UTC"
log.info(f"timezone: {timezone}")
dst = os.path.join("/usr/share/zoneinfo", timezone)
real = os.path.join(root, dst[1:])
if not os.path.exists(real): raise ArchBuilderConfigError(
f"timezone {timezone} not found"
)
lnk = os.path.join(root, "etc/localtime")
if os.path.exists(lnk): os.remove(lnk)
os.symlink(dst, lnk)
conf = os.path.join(root, "etc/timezone")
with open(conf, "w") as f:
f.write(timezone)
f.write(os.linesep)
def proc_locale(ctx: ArchBuilderContext):
reset_locale(ctx)
enable_all(ctx)
set_default(ctx)
set_timezone(ctx)

73
builder/build/mkinitcpio.py Normal file
View File

@ -0,0 +1,73 @@
import os
from logging import getLogger
from tempfile import NamedTemporaryFile
from builder.build.filesystem import chroot_run
from builder.lib.context import ArchBuilderContext
from builder.lib.config import ArchBuilderConfigError
from builder.lib.utils import open_config
log = getLogger(__name__)
def add_values(ctx: ArchBuilderContext, key: str, arr: list[str]):
vals = ctx.get(key, [])
vt = type(vals)
if vt is list: arr.extend(vals)
elif vt is str: arr.extend(vals.split())
else: raise ArchBuilderConfigError(f"bad values for {key}")
def gen_config(ctx: ArchBuilderContext):
modules: list[str] = []
binaries: list[str] = []
files: list[str] = []
hooks: list[str] = []
hooks.append("base")
hooks.append("systemd")
hooks.append("autodetect")
if ctx.cur_arch in ["x86_64", "i386"]:
hooks.append("microcode")
hooks.append("modconf")
if ctx.get("mkinitcpio.hooks.keymap", False):
hooks.extend(["kms", "keyboard", "keymap", "consolefont"])
hooks.extend(["block", "filesystems", "fsck"])
add_values(ctx, "mkinitcpio.modules", modules)
add_values(ctx, "mkinitcpio.binaries", binaries)
add_values(ctx, "mkinitcpio.files", files)
root = ctx.get_rootfs()
cfg = os.path.join(root, "etc/mkinitcpio.conf")
with open_config(cfg) as f:
f.write("MODULES=(%s)\n" % (" ".join(modules)))
f.write("BINARIES=(%s)\n" % (" ".join(binaries)))
f.write("FILES=(%s)\n" % (" ".join(files)))
f.write("HOOKS=(%s)\n" % (" ".join(hooks)))
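`gen_config` serializes the four lists as bash arrays into `mkinitcpio.conf`; a standalone sketch of just that rendering step (hypothetical helper, illustrative values):

```python
def render_conf(modules: list[str], binaries: list[str],
		files: list[str], hooks: list[str]) -> str:
	# mkinitcpio.conf uses bash array syntax: NAME=(a b c)
	lines = [
		"MODULES=(%s)" % " ".join(modules),
		"BINARIES=(%s)" % " ".join(binaries),
		"FILES=(%s)" % " ".join(files),
		"HOOKS=(%s)" % " ".join(hooks),
	]
	return "\n".join(lines) + "\n"

print(render_conf([], [], [], ["base", "systemd", "block", "filesystems", "fsck"]))
```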
def recreate_initrd(ctx: ArchBuilderContext, path: str):
chroot_run(ctx, ["mkinitcpio", "-p", path])
def recreate_initrd_no_autodetect(ctx: ArchBuilderContext, path: str):
tmp = os.path.join(ctx.get_rootfs(), "tmp")
with NamedTemporaryFile("w", dir=tmp) as temp:
with open(path, "r") as f:
temp.write(f.read())
temp.write("\ndefault_options=\"-S autodetect\"\n")
temp.flush()
path = os.path.join("/tmp", os.path.basename(temp.name))
recreate_initrd(ctx, path)
def recreate_initrds(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
no_autodetect = ctx.get("mkinitcpio.no_autodetect", True)
folder = os.path.join(root, "etc/mkinitcpio.d")
for preset in os.listdir(folder):
if not preset.endswith(".preset"): continue
path = os.path.join(folder, preset)
if not no_autodetect: recreate_initrd(ctx, path)
else: recreate_initrd_no_autodetect(ctx, path)
def proc_mkinitcpio(ctx: ArchBuilderContext):
gen_config(ctx)
recreate_initrds(ctx)

103
builder/build/mount.py Normal file
View File

@ -0,0 +1,103 @@
import os
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.lib.mount import MountTab, MountPoint
log = getLogger(__name__)
def init_rootfs(ctx: ArchBuilderContext):
"""
Initialize Arch Linux rootfs
"""
path = ctx.get_rootfs()
def mkdir(mode, *names):
real = os.path.join(path, *names)
if not os.path.exists(real):
log.debug(f"create folder {real} with {mode:04o}")
os.makedirs(real, mode=mode)
log.debug(f"initializing rootfs folders at {path}")
mkdir(0o0755, path)
mkdir(0o0755, "dev")
mkdir(0o0555, "proc")
mkdir(0o0755, "run")
mkdir(0o0555, "sys")
mkdir(0o0755, "var", "lib", "pacman")
mkdir(0o0755, "etc", "pacman.d")
log.info(f"initialized rootfs folders at {path}")
def undo_mounts(ctx: ArchBuilderContext):
"""
Clean up mount points
"""
if len(ctx.mounted) <= 0: return
log.debug("undo mount points")
ctx.chroot = False
while len(ctx.mounted) > 0:
unmounted = 0
for mount in ctx.mounted.copy():
try:
mount.umount()
ctx.mounted.remove(mount)
unmounted += 1
except:
pass
if unmounted == 0:
raise RuntimeError("failed to umount all")
mnts = MountTab.parse_mounts()
if any(mnts.find_folder(ctx.work)):
raise RuntimeError("mount points not cleanup")
def do_mount(
ctx: ArchBuilderContext,
source: str,
target: str,
fstype: str,
options: str
):
"""
Add a mount point
"""
mnt = MountPoint()
mnt.source = source
mnt.target = target
mnt.fstype = fstype
mnt.options = options
mnt.mount()
ctx.mounted.insert(0, mnt)
def init_mount(ctx: ArchBuilderContext):
"""
Setup mount points for rootfs
"""
root = ctx.get_rootfs()
def symlink(target, *names):
real = os.path.join(root, *names)
if not os.path.exists(real):
log.debug(f"create symlink {real} -> {target}")
os.symlink(target, real)
def root_mount(source, target, fstype, options):
real = os.path.realpath(os.path.join(root, target))
do_mount(ctx, source, real, fstype, options)
try:
mnts = MountTab.parse_mounts()
if any(mnts.find_folder(ctx.work)):
raise RuntimeError("mount points not cleanup")
root_mount("proc", "proc", "proc", "nosuid,noexec,nodev")
root_mount("sys", "sys", "sysfs", "nosuid,noexec,nodev,ro")
root_mount("dev", "dev", "devtmpfs", "mode=0755,nosuid")
root_mount("pts", "dev/pts", "devpts", "mode=0620,gid=5,nosuid,noexec")
root_mount("shm", "dev/shm", "tmpfs", "mode=1777,nosuid,nodev")
root_mount("run", "run", "tmpfs", "nosuid,nodev,mode=0755")
root_mount("tmp", "tmp", "tmpfs", "mode=1777,strictatime,nodev,nosuid")
symlink("/proc/self/fd", "dev", "fd")
symlink("/proc/self/fd/0", "dev", "stdin")
symlink("/proc/self/fd/1", "dev", "stdout")
symlink("/proc/self/fd/2", "dev", "stderr")
ctx.chroot = True
except:
log.error("failed to initialize mount points")
undo_mounts(ctx)
raise

68
builder/build/names.py Normal file
View File

@ -0,0 +1,68 @@
import os
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.lib.config import ArchBuilderConfigError
from builder.lib.utils import open_config
log = getLogger(__name__)
def gen_machine_info(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
file = os.path.join(root, "etc/machine-info")
cfg = ctx.get("sysconf")
fields = [
"chassis", "location", "icon_name",
"deployment", "pretty_hostname"
]
with open_config(file) as f:
for field in fields:
if field not in cfg: continue
f.write("%s=\"%s\"\n" % (field.upper(), cfg[field]))
log.info(f"generated machine-info {file}")
def gen_hosts(ctx: ArchBuilderContext):
addrs: list[str] = []
root = ctx.get_rootfs()
file = os.path.join(root, "etc/hosts")
hosts: list[str] = ctx.get("sysconf.hosts", [])
with open_config(file) as f:
for addr in hosts:
s = addr.split()
if len(s) <= 1: raise ArchBuilderConfigError("bad host entry")
addrs.append(s[0])
f.write(addr)
f.write(os.linesep)
name = ctx.get("sysconf.hostname")
if "127.0.1.1" not in addrs and name:
f.write(f"127.0.1.1 {name}\n")
log.info(f"generated hosts {file}")
def gen_hostname(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
file = os.path.join(root, "etc/hostname")
name = ctx.get("sysconf.hostname")
if name is None: return
with open_config(file) as f:
f.write(name)
f.write(os.linesep)
log.info(f"generated hostname {file}")
def gen_environments(ctx: ArchBuilderContext):
root = ctx.get_rootfs()
file = os.path.join(root, "etc/environment")
envs: dict[str, str] = ctx.get("sysconf.environments", {})
with open_config(file) as f:
for key in envs:
val = envs[key]
f.write(f"{key}=\"{val}\"\n")
log.info(f"generated environment {file}")
def proc_names(ctx: ArchBuilderContext):
gen_machine_info(ctx)
gen_environments(ctx)
gen_hostname(ctx)
gen_hosts(ctx)

69
builder/build/pacman.py Normal file
View File

@ -0,0 +1,69 @@
import os
from logging import getLogger
from builder.component.pacman import Pacman
from builder.lib.context import ArchBuilderContext
from builder.lib.utils import open_config
log = getLogger(__name__)
def install_all(ctx: ArchBuilderContext, pacman: Pacman):
packages = ctx.get("pacman.install", [])
if len(packages) <= 0: return
log.info("installing packages: %s", " ".join(packages))
pacman.install(packages)
def install_all_keyring(ctx: ArchBuilderContext, pacman: Pacman):
packages: list[str] = ctx.get("pacman.install", [])
if len(packages) <= 0: return
keyrings = [pkg for pkg in packages if pkg.endswith("-keyring")]
if len(keyrings) <= 0: return
log.info("installing keyrings: %s", " ".join(keyrings))
pacman.add_trust_keyring_pkg(keyrings)
def uninstall_all(ctx: ArchBuilderContext, pacman: Pacman):
packages = ctx.get("pacman.uninstall", [])
if len(packages) <= 0: return
log.info("uninstalling packages: %s", " ".join(packages))
pacman.uninstall(packages)
def append_config(ctx: ArchBuilderContext, lines: list[str]):
lines.append("[options]\n")
lines.append("HoldPkg = pacman glibc filesystem\n")
lines.append(f"Architecture = {ctx.tgt_arch}\n")
lines.append("UseSyslog\n")
lines.append("Color\n")
lines.append("CheckSpace\n")
lines.append("VerbosePkgLists\n")
lines.append("ParallelDownloads = 5\n")
lines.append("SigLevel = Required DatabaseOptional\n")
lines.append("LocalFileSigLevel = Optional\n")
def gen_config(ctx: ArchBuilderContext, pacman: Pacman):
conf = os.path.join(ctx.get_rootfs(), "etc/pacman.conf")
lines: list[str] = []
append_config(ctx, lines)
pacman.append_repos(lines)
with open_config(conf) as f:
f.writelines(lines)
log.info(f"generated pacman config {conf}")
def proc_pacman(ctx: ArchBuilderContext, pacman: Pacman):
install_all(ctx, pacman)
uninstall_all(ctx, pacman)
gen_config(ctx, pacman)
def proc_pacman_keyring(ctx: ArchBuilderContext, pacman: Pacman):
install_all_keyring(ctx, pacman)
def trust_all(ctx: ArchBuilderContext, pacman: Pacman):
if not ctx.gpgcheck: return
trust = ctx.get("pacman.trust", [])
pacman.recv_keys(trust)
for key in trust: pacman.lsign_key(key)

22
builder/build/systemd.py Normal file

@ -0,0 +1,22 @@
import os
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.component import systemd as systemd_comp
log = getLogger(__name__)
def proc_systemd(ctx: ArchBuilderContext):
systemd_comp.enable(ctx, ctx.get("systemd.enable", []))
systemd_comp.disable(ctx, ctx.get("systemd.disable", []))
systemd_comp.set_default(ctx, ctx.get("systemd.default", None))
def proc_machine_id(ctx: ArchBuilderContext):
machine_id = ctx.get("machine-id", "")
root = ctx.get_rootfs()
mid = os.path.join(root, "etc/machine-id")
with open(mid, "w") as f:
f.write(machine_id)
f.write(os.linesep)
if len(machine_id) == 0: log.info("removed machine-id")
else: log.info(f"set machine-id to {machine_id}")

66
builder/build/user.py Normal file

@ -0,0 +1,66 @@
from logging import getLogger
from builder.build.filesystem import chroot_run
from builder.lib.config import ArchBuilderConfigError
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
def proc_user(ctx: ArchBuilderContext, cfg: dict):
if "name" not in cfg: raise ArchBuilderConfigError("username not set")
name = cfg["name"]
cmds = []
if ctx.passwd.lookup_name(name) is None:
cmds.append("useradd")
cmds.append("-m")
action = "created"
else:
cmds.append("usermod")
action = "modified"
if "uid" in cfg: cmds.extend(["-u", str(cfg["uid"])])
if "gid" in cfg: cmds.extend(["-g", str(cfg["gid"])])
if "home" in cfg: cmds.extend(["-d", cfg["home"]])
if "shell" in cfg: cmds.extend(["-s", cfg["shell"]])
if "groups" in cfg: cmds.extend(["-G", ",".join(cfg["groups"])])
cmds.append(name)
ret = chroot_run(ctx, cmds)
if ret != 0: raise OSError(f"{cmds[0]} failed")
if "password" in cfg:
cmds = ["chpasswd"]
text = f"{name}:{cfg['password']}\n"
ret = chroot_run(ctx, cmds, stdin=text)
if ret != 0: raise OSError("chpasswd failed")
ctx.reload_passwd()
log.info(f"{action} user {name}")
def proc_group(ctx: ArchBuilderContext, cfg: dict):
if "name" not in cfg: raise ArchBuilderConfigError("groupname not set")
name = cfg["name"]
cmds = []
if ctx.group.lookup_name(name) is None:
cmds.append("groupadd")
action = "created"
else:
cmds.append("groupmod")
action = "modified"
if "gid" in cfg: cmds.extend(["-g", str(cfg["gid"])])
cmds.append(name)
ret = chroot_run(ctx, cmds)
if ret != 0: raise OSError(f"{cmds[0]} failed")
ctx.reload_passwd()
log.info(f"{action} group {name}")
def proc_users(ctx: ArchBuilderContext):
for user in ctx.get("sysconf.user", []):
proc_user(ctx, user)
def proc_groups(ctx: ArchBuilderContext):
for group in ctx.get("sysconf.group", []):
proc_group(ctx, group)
def proc_usergroup(ctx: ArchBuilderContext):
proc_groups(ctx)
proc_users(ctx)

323
builder/component/pacman.py Normal file

@ -0,0 +1,323 @@
import os
import pyalpm
import logging
import shutil
import libarchive
from logging import getLogger
from builder.lib.context import ArchBuilderContext
from builder.lib.config import ArchBuilderConfigError
log = getLogger(__name__)
def log_cb(level, line):
if level & pyalpm.LOG_ERROR:
ll = logging.ERROR
elif level & pyalpm.LOG_WARNING:
ll = logging.WARNING
else: return
log.log(ll, line.strip())
def dl_cb(filename, ev, data):
match ev:
case 0: log.debug(f"pacman downloading {filename}")
case 2: log.warning(f"pacman retrying download of {filename}")
case 3: log.info(f"pacman downloaded {filename}")
def progress_cb(target, percent, n, i):
if len(target) <= 0 or percent != 0: return
log.info(f"processing {target} ({i}/{n})")
class Pacman:
handle: pyalpm.Handle
ctx: ArchBuilderContext
root: str
databases: dict[str, pyalpm.DB]
config: dict
caches: list[str]
def append_repos(self, lines: list[str]):
for repo in self.databases:
db = self.databases[repo]
lines.append(f"[{repo}]\n")
for server in db.servers:
log.debug(f"server {server}")
lines.append(f"Server = {server}\n")
def append_config(self, lines: list[str]):
siglevel = ("Required DatabaseOptional" if self.ctx.gpgcheck else "Never")
lines.append("[options]\n")
for cache in self.caches:
lines.append(f"CacheDir = {cache}\n")
lines.append(f"RootDir = {self.root}\n")
lines.append(f"GPGDir = {self.handle.gpgdir}\n")
lines.append(f"LogFile = {self.handle.logfile}\n")
lines.append("HoldPkg = pacman glibc\n")
lines.append(f"Architecture = {self.ctx.tgt_arch}\n")
lines.append("UseSyslog\n")
lines.append("Color\n")
lines.append("CheckSpace\n")
lines.append("VerbosePkgLists\n")
lines.append("ParallelDownloads = 5\n")
lines.append(f"SigLevel = {siglevel}\n")
lines.append("LocalFileSigLevel = Optional\n")
self.append_repos(lines)
def init_keyring(self):
path = os.path.join(self.ctx.work, "rootfs")
keyring = os.path.join(path, "etc/pacman.d/gnupg")
if not self.ctx.gpgcheck: return
if os.path.exists(os.path.join(keyring, "trustdb.gpg")):
log.debug("pacman keyring already initialized, skipping")
return
log.info("initializing pacman keyring")
self.pacman_key(["--init"])
def init_config(self):
config = os.path.join(self.ctx.work, "pacman.conf")
if os.path.exists(config):
os.remove(config)
log.info(f"generating pacman config {config}")
lines = []
self.append_config(lines)
log.debug("config content: %s", "\t".join(lines).strip())
log.debug(f"writing {config}")
with open(config, "w") as f:
f.writelines(lines)
def pacman_key(self, args: list[str]):
if not self.ctx.gpgcheck:
raise RuntimeError("GPG check disabled")
keyring = os.path.join(self.root, "etc/pacman.d/gnupg")
config = os.path.join(self.ctx.work, "pacman.conf")
cmds = ["pacman-key"]
cmds.append(f"--gpgdir={keyring}")
cmds.append(f"--config={config}")
cmds.extend(args)
ret = self.ctx.run_external(cmds)
if ret != 0: raise OSError(f"pacman-key failed with {ret}")
def pacman(self, args: list[str]):
config = os.path.join(self.ctx.work, "pacman.conf")
cmds = ["pacman"]
cmds.append("--noconfirm")
cmds.append(f"--root={self.root}")
cmds.append(f"--config={config}")
cmds.extend(args)
ret = self.ctx.run_external(cmds)
if ret != 0: raise OSError(f"pacman failed with {ret}")
def add_database(self, repo: dict):
def resolve(url: str) -> str:
return (url
.replace("$arch", self.ctx.tgt_arch)
.replace("$repo", name))
if "name" not in repo:
raise ArchBuilderConfigError("repo name not set")
name = repo["name"]
if name == "local" or "/" in name:
raise ArchBuilderConfigError("bad repo name")
if name not in self.databases:
self.databases[name] = self.handle.register_syncdb(
name, pyalpm.SIG_DATABASE_MARGINAL_OK
)
db = self.databases[name]
servers: list[str] = []
if "server" in repo:
servers.append(resolve(repo["server"]))
if "servers" in repo:
for server in repo["servers"]:
servers.append(resolve(server))
db.servers = servers
log.info(f"updating database {name}")
db.update(False)
def load_databases(self):
cfg = self.config
if "repo" not in cfg:
raise ArchBuilderConfigError("no repos found in config")
for repo in cfg["repo"]:
self.add_database(repo)
self.init_config()
self.refresh()
def lookup_package(self, name: str) -> list[pyalpm.Package]:
if ".pkg.tar." in name:
pkg = self.handle.load_pkg(name)
if pkg is None: raise RuntimeError(f"load package {name} failed")
return [pkg]
s = name.split("/")
if len(s) == 2:
if s[0] not in self.databases:
raise ValueError(f"database {s[0]} not found")
db = (self.handle.get_localdb() if s[0] == "local" else self.databases[s[0]])
pkg = db.get_pkg(s[1])
if pkg: return [pkg]
raise ValueError(f"package {s[1]} not found")
elif len(s) == 1:
pkg = pyalpm.find_grp_pkgs(self.databases.values(), name)
if len(pkg) > 0: return pkg
for dbn in self.databases:
db = self.databases[dbn]
pkg = db.get_pkg(name)
if pkg: return [pkg]
raise ValueError(f"package {name} not found")
raise ValueError(f"bad package name {name}")
def init_cache(self):
host_cache = "/var/cache/pacman/pkg"
work_cache = os.path.join(self.ctx.work, "packages")
root_cache = os.path.join(self.root, "var/cache/pacman/pkg")
self.caches.clear()
if os.path.exists(host_cache):
self.caches.append(host_cache)
self.caches.append(work_cache)
self.caches.append(root_cache)
os.makedirs(work_cache, mode=0o0755, exist_ok=True)
os.makedirs(root_cache, mode=0o0755, exist_ok=True)
def __init__(self, ctx: ArchBuilderContext):
self.ctx = ctx
if "pacman" not in ctx.config:
raise ArchBuilderConfigError("no pacman found in config")
self.config = ctx.config["pacman"]
self.root = ctx.get_rootfs()
db = os.path.join(self.root, "var/lib/pacman")
self.handle = pyalpm.Handle(self.root, db)
self.handle.arch = ctx.tgt_arch
self.handle.logfile = os.path.join(self.ctx.work, "pacman.log")
self.handle.gpgdir = os.path.join(self.root, "etc/pacman.d/gnupg")
self.handle.logcb = log_cb
self.handle.dlcb = dl_cb
self.handle.progresscb = progress_cb
self.databases = {}
self.caches = []
self.init_cache()
for cache in self.caches:
self.handle.add_cachedir(cache)
self.init_config()
def uninstall(self, pkgs: list[str]):
if len(pkgs) == 0: return
ps = " ".join(pkgs)
log.info(f"removing packages {ps}")
args = ["--needed", "--remove"]
args.extend(pkgs)
self.pacman(args)
def install(
self,
pkgs: list[str],
/,
force: bool = False,
asdeps: bool = False,
nodeps: bool = False,
):
if len(pkgs) == 0: return
core_db = "var/lib/pacman/sync/core.db"
if not os.path.exists(os.path.join(self.root, core_db)):
self.refresh()
ps = " ".join(pkgs)
log.info(f"installing packages {ps}")
args = ["--sync"]
if not force: args.append("--needed")
if asdeps: args.append("--asdeps")
if nodeps: args.extend(["--nodeps", "--nodeps"])
args.extend(pkgs)
self.pacman(args)
def download(self, pkgs: list[str]):
if len(pkgs) == 0: return
core_db = "var/lib/pacman/sync/core.db"
if not os.path.exists(os.path.join(self.root, core_db)):
self.refresh()
log.info("downloading packages %s", " ".join(pkgs))
args = ["--sync", "--downloadonly", "--nodeps", "--nodeps"]
args.extend(pkgs)
self.pacman(args)
def install_local(self, files: list[str]):
if len(files) == 0: return
log.info("installing local packages %s", " ".join(files))
args = ["--needed", "--upgrade"]
args.extend(files)
self.pacman(args)
def refresh(self, /, force: bool = False):
log.info("refresh pacman database")
args = ["--sync", "--refresh"]
if force: args.append("--refresh")
self.pacman(args)
def recv_keys(self, keys: str | list[str]):
args = ["--recv-keys"]
if type(keys) is str:
args.append(keys)
elif type(keys) is list:
if len(keys) <= 0: return
args.extend(keys)
else: raise TypeError("bad keys type")
self.pacman_key(args)
def lsign_key(self, key: str):
self.pacman_key(["--lsign-key", key])
def populate_keys(
self,
names: str | list[str] = None,
folder: str = None
):
args = ["--populate"]
if folder: args.extend(["--populate-from", folder])
if names is None: pass
elif type(names) is str: args.append(names)
elif type(names) is list: args.extend(names)
else: raise TypeError("bad names type")
self.pacman_key(args)
def find_package_file(self, pkg: pyalpm.Package) -> str | None:
for cache in self.caches:
p = os.path.join(cache, pkg.filename)
if os.path.exists(p): return p
return None
def trust_keyring_pkg(self, pkg: pyalpm.Package):
if not self.ctx.gpgcheck: return
names: list[str] = []
target = os.path.join(self.ctx.work, "keyrings")
keyring = "usr/share/pacman/keyrings/"
path = self.find_package_file(pkg)
if os.path.exists(target):
shutil.rmtree(target)
os.makedirs(target, mode=0o0755)
if path is None: raise RuntimeError(
f"package {pkg.name} not found"
)
log.debug(f"processing keyring package {pkg.name}")
with libarchive.file_reader(path) as archive:
for file in archive:
pn: str = file.pathname
if not pn.startswith(keyring): continue
fn = pn[len(keyring):]
if len(fn) <= 0: continue
if fn.endswith(".gpg"): names.append(fn[:-4])
dest = os.path.join(target, fn)
log.debug(f"extracting {pn} to {dest}")
with open(dest, "wb") as f:
for block in file.get_blocks(file.size):
f.write(block)
fd = f.fileno()
os.fchmod(fd, file.mode)
os.fchown(fd, file.uid, file.gid)
self.populate_keys(names, target)
def add_trust_keyring_pkg(self, pkgnames: list[str]):
if not self.ctx.gpgcheck: return
if len(pkgnames) <= 0: return
self.download(pkgnames)
for pkgname in pkgnames:
pkgs = self.lookup_package(pkgname)
for pkg in pkgs:
self.trust_keyring_pkg(pkg)


@ -0,0 +1,38 @@
from builder.lib import utils
from builder.build import filesystem
from builder.lib.context import ArchBuilderContext
def systemctl(ctx: ArchBuilderContext, args: list[str]):
path = ctx.get_rootfs()
full_args = ["systemctl"]
if utils.have_external("systemctl"):
full_args.append(f"--root={path}")
full_args.extend(args)
ret = ctx.run_external(full_args)
else:
full_args.extend(args)
ret = filesystem.chroot_run(ctx, full_args)
if ret != 0: raise OSError(
f"systemctl {' '.join(args)} failed: {ret}"
)
def enable(ctx: ArchBuilderContext, units: list[str]):
if len(units) <= 0: return
args = ["enable", "--"]
args.extend(units)
systemctl(ctx, args)
def disable(ctx: ArchBuilderContext, units: list[str]):
if len(units) <= 0: return
args = ["disable", "--"]
args.extend(units)
systemctl(ctx, args)
def set_default(ctx: ArchBuilderContext, unit: str):
if not unit: return
systemctl(ctx, ["set-default", "--", unit])

74
builder/component/user.py Normal file

@ -0,0 +1,74 @@
from logging import getLogger
from builder.lib.config import ArchBuilderConfigError
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
def parse_usergroup_item(
ctx: ArchBuilderContext,
item: str | int,
group: bool = False
) -> int:
if type(item) is int:
return int(item)
elif type(item) is str:
if group:
grp = ctx.group.lookup_name(item)
if grp is None: raise ArchBuilderConfigError(
f"group {item} not found"
)
return grp.gid
else:
user = ctx.passwd.lookup_name(item)
if user is None: raise ArchBuilderConfigError(
f"user {item} not found"
)
return user.uid
else: raise ArchBuilderConfigError("bad owner type")
def parse_owner(ctx: ArchBuilderContext, owner: str) -> tuple[int, int]:
if ":" in owner:
i = owner.find(":")
uid = parse_usergroup_item(ctx, owner[0:i], False)
gid = parse_usergroup_item(ctx, owner[i+1:], True)
else:
uid = parse_usergroup_item(ctx, owner, False)
user = ctx.passwd.lookup_uid(uid)
if user is None: raise ArchBuilderConfigError(
f"user {uid} not found"
)
gid = user.gid
return uid, gid
def parse_usergroup_from(
ctx: ArchBuilderContext,
node: dict,
group: bool = False,
default: int = 0
) -> int:
kid = "uid" if not group else "gid"
kname = "owner" if not group else "group"
if kid in node: return int(node[kid])
if kname in node: return parse_usergroup_item(
ctx, node[kname], group
)
return default
def parse_user_from(
ctx: ArchBuilderContext,
node: dict,
default: tuple[int, int] = (0, -1)
) -> tuple[int, int]:
if "owner" in node: return parse_owner(ctx, node["owner"])
uid = parse_usergroup_from(ctx, node, False, default[0])
gid = parse_usergroup_from(ctx, node, True, default[1])
if gid == -1:
user = ctx.passwd.lookup_uid(uid)
if user is None: raise ArchBuilderConfigError(
f"user {uid} not found"
)
gid = user.gid
return uid, gid

27
builder/disk/content.py Normal file

@ -0,0 +1,27 @@
from builder.disk.image import ImageBuilder
class ImageContentBuilder:
builder: ImageBuilder
properties: dict
def __init__(self, builder: ImageBuilder):
self.builder = builder
self.properties = {}
def build(self): pass
class ImageContentBuilders:
types: list[tuple[str, type[ImageContentBuilder]]] = []
@staticmethod
def init():
if len(ImageContentBuilders.types) > 0: return
from builder.disk.types import types
ImageContentBuilders.types.extend(types)
@staticmethod
def find_builder(name: str) -> type[ImageContentBuilder]:
types = ImageContentBuilders.types
return next((t[1] for t in types if name == t[0]), None)


@ -0,0 +1,11 @@
from builder.disk.filesystem.creator import FileSystemCreator
class BtrfsCreator(FileSystemCreator):
def create(self):
cmds: list[str] = ["mkfs.btrfs"]
if "fsname" in self.config: cmds.extend(["-L", self.config["fsname"]])
if "fsuuid" in self.config: cmds.extend(["-U", self.config["fsuuid"]])
cmds.append(self.device)
ret = self.ctx.run_external(cmds)
if ret != 0: raise OSError("mkfs.btrfs failed")


@ -0,0 +1,130 @@
import os
from logging import getLogger
from builder.lib.blkid import Blkid
from builder.disk.layout.gpt.types import DiskTypesGPT
from builder.disk.content import ImageContentBuilder
from builder.lib.config import ArchBuilderConfigError
from builder.lib.mount import MountPoint
from builder.lib.utils import path_to_name
log = getLogger(__name__)
class FileSystemBuilder(ImageContentBuilder):
blkid: Blkid = Blkid()
fstype_map: dict = {
"fat12": "vfat",
"fat16": "vfat",
"fat32": "vfat",
}
def proc_cmdline_root(self, cfg: dict, mnt: MountPoint):
ccfg = self.builder.ctx.config_orig
if "kernel" not in ccfg: ccfg["kernel"] = {}
kern = ccfg["kernel"]
if "cmdline" not in kern: kern["cmdline"] = []
cmds: list[str] = kern["cmdline"]
if any(cmdline.startswith("root=") for cmdline in cmds):
raise ArchBuilderConfigError("root already set in cmdline")
if mnt.target != "/":
log.warning(f"root target is not / ({mnt.target})")
ecmds = [
"ro", "rootwait",
f"root={mnt.source}",
f"rootfstype={mnt.fstype}",
f"rootflags={mnt.options}",
]
scmds = " ".join(ecmds)
log.debug(f"add root cmdline {scmds}")
cmds.extend(ecmds)
self.builder.ctx.resolve_subscript()
def resolve_dev_tag(self, dev: str, mnt: MountPoint):
dev = dev.upper()
match dev:
case "UUID" | "LABEL":
log.warning(f"'{dev}=' maybe unsupported by kernel")
if dev in self.properties: val = self.properties[dev]
else: val = self.blkid.get_tag_value(None, dev, self.builder.device)
case "PARTUUID" | "PARTLABEL":
val = self.properties[dev] if dev in self.properties else None
case _: raise ArchBuilderConfigError(f"unsupported device type {dev}")
if not val: raise ArchBuilderConfigError(f"property {dev} not found")
mnt.source = f"{dev}={val}"
def proc_grow(self, cfg: dict, mnt: MountPoint):
root = self.builder.ctx.get_rootfs()
if "ptype" not in cfg: raise ArchBuilderConfigError("no ptype set for grow")
ptype = DiskTypesGPT.lookup_one_uuid(cfg["ptype"])
if ptype is None: raise ArchBuilderConfigError(f"unknown type {cfg['ptype']}")
mnt.option.append("x-systemd.growfs")
conf = "grow-%s.conf" % path_to_name(mnt.target)
repart = os.path.join(root, "etc/repart.d", conf)
os.makedirs(os.path.dirname(repart), mode=0o0755, exist_ok=True)
fsname, fsuuid = None, None
dev = self.builder.device
if "fsname" in cfg: fsname = cfg["fsname"]
if "fsuuid" in cfg: fsuuid = cfg["fsuuid"]
if fsname is None: fsname = self.blkid.get_tag_value(None, "LABEL", dev)
if fsuuid is None: fsuuid = self.blkid.get_tag_value(None, "UUID", dev)
with open(repart, "w") as f:
f.write("[Partition]\n")
f.write(f"Type={ptype}\n")
f.write(f"Format={mnt.fstype}\n")
if fsname: f.write(f"Label={fsname}\n")
if fsuuid: f.write(f"UUID={fsuuid}\n")
log.info(f"generated repart config {repart}")
def proc_fstab(self, cfg: dict):
mnt = MountPoint()
ccfg = self.builder.ctx.config
fstab = cfg["fstab"] if "fstab" in cfg else {}
rfstab = ccfg["fstab"] if "fstab" in ccfg else {}
mnt.target = cfg["mount"]
mnt.fstype = cfg["fstype"]
dev = None
if "dev" in fstab: dev = fstab["dev"]
if "dev" in rfstab: dev = rfstab["dev"]
if dev: self.resolve_dev_tag(dev, mnt)
if mnt.target == "/": mnt.fs_passno = 1
elif not mnt.virtual: mnt.fs_passno = 2
if "target" in fstab: mnt.target = fstab["target"]
if "source" in fstab: mnt.source = fstab["source"]
if "type" in fstab: mnt.fstype = fstab["type"]
if "dump" in fstab: mnt.fs_freq = fstab["dump"]
if "passno" in fstab: mnt.fs_passno = fstab["passno"]
if "flags" in fstab:
flags = fstab["flags"]
if type(flags) is str: mnt.options = flags
elif type(flags) is list: mnt.option = flags
else: raise ArchBuilderConfigError("bad flags")
if mnt.source is None or mnt.target is None:
raise ArchBuilderConfigError("incomplete fstab")
if len(self.builder.ctx.fstab.find_target(mnt.target)) > 0:
raise ArchBuilderConfigError(f"duplicate fstab target {mnt.target}")
if mnt.fstype in self.fstype_map:
mnt.fstype = self.fstype_map[mnt.fstype]
if "grow" in cfg and cfg["grow"]:
self.proc_grow(cfg, mnt)
mnt.fixup()
log.debug(f"add fstab entry {mnt}")
self.builder.ctx.fstab.append(mnt)
self.builder.ctx.fsmap[mnt.source] = self.builder.device
if "boot" in fstab and fstab["boot"]:
self.proc_cmdline_root(cfg, mnt)
def format(self, fstype: str):
from builder.disk.filesystem.creator import FileSystemCreators
FileSystemCreators.init()
t = FileSystemCreators.find_builder(fstype)
if t is None: raise ArchBuilderConfigError(f"unsupported fs type {fstype}")
creator = t(fstype, self, self.builder.config)
creator.create()
def build(self):
cfg = self.builder.config
if "fstype" not in cfg:
raise ArchBuilderConfigError("fstype not set")
fstype = cfg["fstype"]
self.format(fstype)
if "mount" in cfg:
self.proc_fstab(cfg)


@ -0,0 +1,43 @@
from builder.lib.context import ArchBuilderContext
from builder.disk.filesystem.build import FileSystemBuilder
class FileSystemCreator:
builder: FileSystemBuilder
config: dict
fstype: str
device: str
ctx: ArchBuilderContext
def __init__(
self,
fstype: str,
builder: FileSystemBuilder,
config: dict
):
self.builder = builder
self.config = config
self.fstype = fstype
self.device = builder.builder.device
self.ctx = builder.builder.ctx
def create(self): pass
class FileSystemCreators:
types: list[tuple[str, type[FileSystemCreator]]] = [
]
@staticmethod
def init():
if len(FileSystemCreators.types) > 0: return
from builder.disk.filesystem.types import types
FileSystemCreators.types.extend(types)
@staticmethod
def find_builder(name: str) -> type[FileSystemCreator]:
return next((
t[1]
for t in FileSystemCreators.types
if name == t[0]
), None)


@ -0,0 +1,17 @@
import os
from builder.disk.filesystem.creator import FileSystemCreator
class EXT4Creator(FileSystemCreator):
def create(self):
cmds: list[str] = ["mke2fs"]
if self.fstype not in ["ext2", "ext3", "ext4"]:
raise RuntimeError(f"unsupported fs {self.fstype}")
cmds.extend(["-t", self.fstype])
if "fsname" in self.config: cmds.extend(["-L", self.config["fsname"]])
if "fsuuid" in self.config: cmds.extend(["-U", self.config["fsuuid"]])
env = os.environ.copy()
env["MKE2FS_DEVICE_SECTSIZE"] = str(self.builder.builder.sector)
cmds.append(self.device)
ret = self.ctx.run_external(cmds, env=env)
if ret != 0: raise OSError("mke2fs failed")


@ -0,0 +1,17 @@
from builder.disk.filesystem.creator import FileSystemCreator
from builder.disk.filesystem.btrfs import BtrfsCreator
from builder.disk.filesystem.ext4 import EXT4Creator
from builder.disk.filesystem.vfat import FatCreator
types: list[tuple[str, type[FileSystemCreator]]] = [
("ext2", EXT4Creator),
("ext3", EXT4Creator),
("ext4", EXT4Creator),
("vfat", FatCreator),
("fat12", FatCreator),
("fat16", FatCreator),
("fat32", FatCreator),
("msdos", FatCreator),
("btrfs", BtrfsCreator),
]


@ -0,0 +1,22 @@
from builder.disk.filesystem.creator import FileSystemCreator
from builder.lib.config import ArchBuilderConfigError
class FatCreator(FileSystemCreator):
def create(self):
cmds: list[str] = ["mkfs.fat"]
bits: int = 0
match self.fstype:
case "vfat": bits = 32
case "fat12": bits = 12
case "fat16": bits = 16
case "fat32": bits = 32
case "msdos": bits = 16
case _: raise ArchBuilderConfigError("unknown fat type")
cmds.append(f"-F{bits}")
if "fsname" in self.config: cmds.extend(["-n", self.config["fsname"]])
if "fsvolid" in self.config: cmds.extend(["-i", self.config["fsvolid"]])
cmds.extend(["-S", str(self.builder.builder.sector)])
cmds.append(self.device)
ret = self.ctx.run_external(cmds)
if ret != 0: raise OSError("mkfs.fat failed")

123
builder/disk/image.py Normal file

@ -0,0 +1,123 @@
import os
import stat
from typing import Self
from logging import getLogger
from builder.lib.loop import loop_setup
from builder.lib.utils import size_to_bytes
from builder.lib.config import ArchBuilderConfigError
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
class ImageBuilder:
offset: int = 0
size: int = 0
sector: int = 512
type: str = None
output: str = None
device: str = None
loop: bool = False
config: dict = {}
parent: Self = None
ctx: ArchBuilderContext = None
properties: dict = {}
def create_image(self):
if self.device: raise ValueError("device is set")
if self.output is None: raise ArchBuilderConfigError(
"no output set for image"
)
fd, recreate = -1, False
if os.path.exists(self.output):
st = os.stat(self.output)
if stat.S_ISBLK(st.st_mode):
log.debug(f"target {self.output} is a block device")
if self.size != 0: raise ArchBuilderConfigError(
"cannot use size field when output is a device"
)
elif stat.S_ISREG(st.st_mode):
log.debug(f"target {self.output} exists, removing")
recreate = True
os.remove(self.output)
else: raise ArchBuilderConfigError("target is not a file")
else: recreate = True
if recreate:
try:
if self.size == 0: raise ArchBuilderConfigError("size is not set")
log.info(f"creating {self.output} with {self.size} bytes")
flags = os.O_RDWR | os.O_CREAT | os.O_TRUNC
fd = os.open(self.output, flags=flags, mode=0o0644)
os.posix_fallocate(fd, 0, self.size)
finally:
if fd >= 0: os.close(fd)
def setup_loop(self):
target = self.output if self.parent is None else self.parent.device
if target is None: raise ArchBuilderConfigError("no target for image")
log.debug(f"try to create loop device from {target}")
log.debug(f"loop offset: {self.offset}, size: {self.size}, sector {self.sector}")
dev = loop_setup(
path=target,
size=self.size,
offset=self.offset,
block_size=self.sector,
)
log.info(f"created loop device {dev} from {target}")
self.ctx.loops.append(dev)
self.loop = True
self.device = dev
def __init__(
self,
ctx: ArchBuilderContext,
config: dict,
parent: Self = None
):
self.ctx = ctx
self.config = config
self.parent = parent
self.offset = 0
self.size = 0
self.sector = 512
self.loop = False
self.properties = {}
if "output" in config: self.output = config["output"]
if parent is None:
if self.output is None:
raise ArchBuilderConfigError("no output set for image")
if not self.output.startswith("/"):
self.output = os.path.join(ctx.get_output(), self.output)
else:
if parent.device is None: raise ArchBuilderConfigError(
"no device set for parent image"
)
self.sector = parent.sector
if "sector" in config: self.sector = size_to_bytes(config["sector"])
if "size" in config: self.size = size_to_bytes(config["size"])
if "type" in config: self.type = config["type"]
if self.type is None: raise ArchBuilderConfigError("no type set in image")
def build(self):
if self.device is None:
if self.output:
self.create_image()
self.setup_loop()
from builder.disk.content import ImageContentBuilders
ImageContentBuilders.init()
t = ImageContentBuilders.find_builder(self.type)
if t is None: raise ArchBuilderConfigError(
f"unsupported builder type {self.type}"
)
builder = t(self)
builder.properties.update(self.properties)
builder.build()
def proc_image(ctx: ArchBuilderContext):
if "image" not in ctx.config: return
builders: list[ImageBuilder] = []
for image in ctx.config["image"]:
builder = ImageBuilder(ctx, image)
builders.append(builder)
for builder in builders:
builder.build()


@ -0,0 +1,27 @@
from builder.lib.area import Area, Areas
class DiskArea:
def find_free_area(
self,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
biggest: bool = True,
) -> Area:
return self.get_free_areas().find(
start, end, size, area, biggest
)
def get_free_size(self) -> int:
return sum(area.size for area in self.get_free_areas())
def get_usable_area(self) -> Area:
pass
def get_used_areas(self, table=False) -> Areas:
pass
def get_free_areas(self) -> Areas:
pass


@ -0,0 +1,38 @@
from builder.disk.content import ImageContentBuilder
from builder.disk.layout.disk import Disk
from builder.disk.image import ImageBuilder
from builder.lib.config import ArchBuilderConfigError
from builder.lib.context import ArchBuilderContext
class DiskLayoutBuilder(ImageContentBuilder):
ctx: ArchBuilderContext
def build(self):
self.ctx = self.builder.ctx
cfg = self.builder.config
if "layout" not in cfg:
raise ArchBuilderConfigError("layout not set")
if "partitions" not in cfg:
raise ArchBuilderConfigError("partitions not set")
layout = Disk.find_layout(cfg["layout"])
if layout is None:
raise ArchBuilderConfigError(f"layout {layout} not found")
disk = layout(
path=self.builder.device,
sector=self.builder.sector
)
disk.create()
disk.set_from(cfg)
builders: list[ImageBuilder] = []
for part in cfg["partitions"]:
p = disk.add_partition_from(part)
if "type" in part:
b = ImageBuilder(self.ctx, part, self.builder)
if p.partlabel: b.properties["PARTLABEL"] = p.partlabel
if p.partuuid: b.properties["PARTUUID"] = p.partuuid
b.sector, b.offset, b.size = disk.sector, p.start, p.size
builders.append(b)
disk.save()
for builder in builders:
builder.build()

133
builder/disk/layout/dio.py Normal file

@ -0,0 +1,133 @@
import os
import io
import stat
import fcntl
import ctypes
from logging import getLogger
from builder.disk.layout import ioctl
from builder.lib.utils import bytes_pad
log = getLogger(__name__)
class DiskIO:
_min_sector: int
_fp: io.RawIOBase
_opened: bool
_sector: int
_cached: dict
align: int
def load_block_info(self):
if self._fp is None: return
fd = self._fp.fileno()
st = os.fstat(fd)
if not stat.S_ISBLK(st.st_mode): return
try:
val = ctypes.c_uint()
fcntl.ioctl(fd, ioctl.BLKSSZGET, val)
log.debug(f"Block sector size: {val.value}")
self._sector = val.value
except OSError: pass
try:
val = ctypes.c_uint64()
fcntl.ioctl(fd, ioctl.BLKGETSIZE64, val)
log.debug(f"Block total size: {val.value}")
self._cached["total_size"] = val.value
self._cached["total_lba"] = val.value // self._sector
except OSError: pass
try:
val = ioctl.HDGeometry()
fcntl.ioctl(fd, ioctl.HDIO_GETGEO, val)
log.debug(f"Block heads: {val.heads}")
log.debug(f"Block sectors: {val.sectors}")
log.debug(f"Block cylinders: {val.cylinders}")
log.debug(f"Block start: {val.start}")
self._cached["heads"] = val.heads
self._cached["sectors"] = val.sectors
self._cached["cylinders"] = val.cylinders
self._cached["start"] = val.start
except OSError: pass
@property
def sector(self) -> int:
return self._sector
@property
def align_lba(self) -> int:
return self.align // self.sector
@align_lba.setter
def align_lba(self, v: int):
self.align = v * self.sector
@property
def total_size(self) -> int:
if "total_size" in self._cached:
return self._cached["total_size"]
off = self._fp.tell()
try:
self._fp.seek(0, os.SEEK_END)
ret = int(self._fp.tell())
finally:
self._fp.seek(off, os.SEEK_SET)
return ret
@property
def total_lba(self) -> int:
if "total_lba" in self._cached:
return self._cached["total_lba"]
size = self.total_size
if size % self.sector != 0:
raise ValueError("size misaligned with sector size")
return size // self.sector
def seek_lba(self, lba: int) -> int:
if lba >= self.total_lba:
raise ValueError("lba out of range")
return self._fp.seek(self.sector * lba, os.SEEK_SET)
def read_lba(self, lba: int) -> bytes:
off = self._fp.tell()
try:
self.seek_lba(lba)
ret = self._fp.read(self.sector)
finally:
self._fp.seek(off, os.SEEK_SET)
return ret
def read_lbas(self, lba: int, count: int = 0) -> bytes:
return b"".join(self.read_lba(lba + i) for i in range(count))
def write_lba(self, lba: int, b: bytes) -> int:
if not self._fp.writable():
raise IOError("write is not allowed")
off = self._fp.tell()
try:
data = bytes_pad(b, self.sector, trunc=True)
self.seek_lba(lba)
ret = self._fp.write(data)
finally:
self._fp.seek(off, os.SEEK_SET)
return ret
def write_lbas(self, lba: int, b: bytes, count: int = 0) -> bytes:
s = self.sector
if count == 0:
if len(b) % s != 0: raise ValueError(
"buffer misaligned with sector size"
)
count = len(b) // s
if count * s > len(b):
raise ValueError("buffer too small")
for i in range(count):
t = b[i * s:(i + 1) * s]
self.write_lba(lba + i, t)
return b
def __init__(self):
self._min_sector = 512
self._fp = None
self._opened = False
self._sector = 0
self._cached = {}
self.align = 0x100000
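The read/write helpers above all follow the same pattern: save the current file offset, seek to `sector * lba`, transfer whole sectors, then restore the offset. A minimal standalone sketch of that pattern (hypothetical helper name, no dependency on the builder package):

```python
import io
import os

SECTOR = 512  # assumed sector size; real block devices report theirs via BLKSSZGET

def read_lba(fp, lba: int, sector: int = SECTOR) -> bytes:
    # Save and restore the file position, as DiskIO.read_lba does
    off = fp.tell()
    try:
        fp.seek(sector * lba, os.SEEK_SET)
        return fp.read(sector)
    finally:
        fp.seek(off, os.SEEK_SET)

# Demo on an in-memory "disk" of four sectors
disk = io.BytesIO(bytes(SECTOR) + b"A" * SECTOR + bytes(SECTOR * 2))
print(read_lba(disk, 1)[:4])  # b'AAAA'
```

Restoring the offset is what lets callers interleave LBA-addressed reads with ordinary sequential I/O on the same file object.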

from io import RawIOBase
from builder.disk.layout.layout import DiskLayout
from builder.disk.layout.mbr.layout import DiskLayoutMBR
from builder.disk.layout.gpt.layout import DiskLayoutGPT
class Disk:
layouts: list[tuple[type[DiskLayout], list[str]]] = [
(DiskLayoutGPT, ["gpt", "guid", "efi", "uefi"]),
(DiskLayoutMBR, ["mbr", "bios", "legacy", "msdos", "dos"]),
]
@staticmethod
def probe_layout(
fp: RawIOBase = None,
path: str = None,
sector: int = 512,
fallback: str | type[DiskLayout] = None,
) -> DiskLayout | None:
for layout in Disk.layouts:
d = layout[0](fp, path, sector)
if d.loaded: return d
if fallback:
if type(fallback) is str:
fallback = Disk.find_layout(fallback)
if isinstance(fallback, type) and issubclass(fallback, DiskLayout):
d = fallback(fp, path, sector)
if d.loaded: return d
return None
@staticmethod
def find_layout(name: str) -> type[DiskLayout]:
return next((
layout[0]
for layout in Disk.layouts
if name in layout[1]
), None)
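`probe_layout` walks the registered layouts in order and returns the first whose constructor reports `loaded`, trying a named fallback last. The same first-match-wins pattern, reduced to a standalone sketch with dummy classes (all names hypothetical):

```python
class DummyLayout:
    # Stands in for DiskLayoutGPT/DiskLayoutMBR: the constructor probes
    # the media and sets .loaded accordingly
    def __init__(self, loads: bool):
        self.loaded = loads

def probe(candidates, fallback=None):
    # Try each candidate in registration order, first loaded one wins
    for make in candidates:
        d = make()
        if d.loaded:
            return d
    # Only consult the fallback when every probe failed
    if fallback is not None:
        d = fallback()
        if d.loaded:
            return d
    return None

gpt = lambda: DummyLayout(False)  # pretend the GPT probe fails
mbr = lambda: DummyLayout(True)   # pretend the MBR probe succeeds
print(probe([gpt, mbr]) is not None)  # True
```

Listing GPT before MBR in `Disk.layouts` matters: every GPT disk also carries a protective MBR, so probing MBR first would misidentify it.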

import ctypes
from io import RawIOBase
from uuid import UUID, uuid4
from ctypes import sizeof
from binascii import crc32
from logging import getLogger
from builder.lib.area import Area, Areas
from builder.lib.utils import bytes_pad, round_up, round_down
from builder.disk.layout.layout import DiskLayout
from builder.disk.layout.mbr.types import DiskTypesMBR
from builder.disk.layout.mbr.struct import MasterBootRecord, MbrPartEntry
from builder.disk.layout.gpt.struct import EfiPartTableHeader, EfiPartEntry
from builder.disk.layout.gpt.types import DiskTypesGPT
from builder.disk.layout.gpt.uefi import EfiGUID
from builder.disk.layout.gpt.part import DiskPartGPT
log = getLogger(__name__)
NULL_UUID = UUID("00000000-0000-0000-0000-000000000000")
class DiskLayoutGPT(DiskLayout):
boot_code: bytes
uuid: UUID
main_entries_lba: int
entries_count: int
partitions: list[DiskPartGPT]
@property
def id(self) -> str:
return str(self.uuid)
@id.setter
def id(self, val: str):
self.uuid = UUID(val)
@property
def entries_size(self) -> int:
return self.entries_count * sizeof(EfiPartEntry)
@property
def entries_sectors(self) -> int:
return self.entries_size // self.sector
@property
def backup_entries_lba(self) -> int:
return self.total_lba - self.entries_sectors - 1
@property
def main_entries(self) -> Area:
return Area(
start=self.main_entries_lba,
size=self.entries_sectors
).fixup()
@property
def backup_entries(self) -> Area:
return Area(
start=self.backup_entries_lba,
size=self.entries_sectors
).fixup()
def add_partition(
self,
ptype: str | UUID = None,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
name: str = None,
uuid: UUID = None,
) -> DiskPartGPT | None:
area = self.find_free_area(start, end, size, area)
if area is None: return None
if ptype is None: ptype = "linux"
t = DiskTypesGPT.lookup_one_uuid(ptype)
if t is None: raise ValueError(f"unknown type {ptype}")
self.resort_partitions()
idx = len(self.partitions)
part = DiskPartGPT(self, None, idx)
part.start_lba = area.start
part.end_lba = area.end
part.type_uuid = t
part.uuid = uuid or uuid4()
part.part_name = name or ""
self.partitions.insert(idx, part)
log.info(
f"Added partition {idx} "
f"start LBA {area.start} "
f"end LBA {area.end} "
f"type {ptype}"
)
return part
def add_partition_from(self, config: dict) -> DiskPartGPT:
area = self.parse_free_area(config)
if area is None: raise ValueError("no free area found")
ptype = config["ptype"] if "ptype" in config else None
pname = config["pname"] if "pname" in config else None
puuid = UUID(config["puuid"]) if "puuid" in config else None
part = self.add_partition(ptype, area=area, name=pname, uuid=puuid)
if part:
if "attributes" in config:
part.attributes = config["attributes"]
return part
def get_usable_area(self) -> Area | None:
if self.main_entries_lba < 2: return None
if self.entries_count <= 0: return None
start = 2
end = round_down(self.backup_entries_lba, self.align_lba)
rs = min((part.start_lba for part in self.partitions), default=end)
first = self.main_entries_lba + self.entries_sectors + 1
if len(self.partitions) == 0 or first <= rs: start = first
start = round_up(start, self.align_lba)
return Area(start=start, end=end - 1).fixup()
def get_used_areas(self, table=False) -> Areas:
areas = Areas()
if table:
areas.add(start=0, size=2)
areas.add(area=self.main_entries)
areas.add(area=self.backup_entries)
areas.add(start=self.total_lba - 1, size=1)
for part in self.partitions:
areas.add(area=part.to_area())
areas.merge()
return areas
def get_free_areas(self) -> Areas:
areas = Areas()
usable = self.get_usable_area()
if usable is None: return areas
areas.add(area=usable)
for part in self.partitions:
areas.splice(area=part.to_area())
areas.align(self.align_lba)
return areas
def try_load_pmbr(self) -> MasterBootRecord | None:
pmbr_data = self.read_lba(0)
pmbr = MasterBootRecord.from_buffer_copy(pmbr_data)
if not pmbr.check_signature():
log.debug("Bad protective MBR")
return None
self.boot_code = pmbr.boot_code
return pmbr
def get_pmbr_entry(self, pmbr: MasterBootRecord) -> MbrPartEntry | None:
if pmbr is None: return None
ps = pmbr.partitions
tid = DiskTypesMBR.lookup_one_id("gpt")
return next((part for part in ps if part.type_id == tid), None)
def try_load_entries(self, gpt: EfiPartTableHeader) -> bool:
if gpt is None: return False
es = sizeof(EfiPartEntry)
if gpt.entry_size != es:
log.debug("Unsupported GPT entry size")
log.debug(f"size {es} != {gpt.entry_size}")
return False
size = gpt.entries_count * gpt.entry_size
sectors = size // self.sector
if size % self.sector != 0:
log.debug("GPT entries size misaligned with sector size")
sectors += 1
parts = self.read_lbas(gpt.part_entry_lba, sectors)
crc = crc32(parts[0:size], 0)
if crc != gpt.entries_crc32:
log.debug("GPT entries crc32 mismatch")
log.debug(f"crc32 {crc} != {gpt.entries_crc32}")
return False
self.partitions.clear()
for idx in range(gpt.entries_count):
start = idx * gpt.entry_size
size = min(es, gpt.entry_size)
data = parts[start:start + size]
entry = EfiPartEntry.from_buffer_copy(data)
if entry.type_guid.to_uuid() == NULL_UUID: continue
idx = len(self.partitions)
part = DiskPartGPT(self, entry, idx)
self.partitions.insert(idx, part)
log.debug(
f"Found partition {idx} "
f"start LBA {part.start_lba} "
f"end LBA {part.end_lba} "
f"name {part.part_name} "
)
self.uuid = gpt.disk_guid.to_uuid()
self.main_entries_lba = gpt.part_entry_lba
self.entries_count = gpt.entries_count
log.info("Found %d partitions in GPT", len(self.partitions))
return True
def try_load_lba(self, lba: int) -> EfiPartTableHeader:
log.debug(f"Try GPT at LBA {lba}")
gpt_data = self.read_lba(lba)
gpt = EfiPartTableHeader.from_buffer_copy(gpt_data)
if gpt and gpt.check_header():
log.debug(f"Loaded GPT at LBA {lba}")
else:
log.debug(f"Bad GPT at LBA {lba}")
gpt = None
return gpt
def try_load_gpt(self, pmbr: MasterBootRecord) -> bool:
lba = -1
pent = self.get_pmbr_entry(pmbr)
if pent:
lba = pent.start_lba
gpt = self.try_load_lba(lba)
if self.try_load_entries(gpt): return True
if lba != 1:
lba = 1
gpt = self.try_load_lba(lba)
if self.try_load_entries(gpt): return True
log.debug("Main GPT table unavailable")
lba = -1
if pent:
lba = pent.size_lba - 1
gpt = self.try_load_lba(lba)
if self.try_load_entries(gpt): return True
last = self.total_lba - 1
if lba != last:
lba = last
gpt = self.try_load_lba(last)
if self.try_load_entries(gpt): return True
log.debug("Backup GPT table unavailable")
return False
def load_header(self) -> bool:
self.unload()
pmbr = self.try_load_pmbr()
if pmbr is None:
log.debug("No valid protective MBR, not a GPT disk")
return False
pent = self.get_pmbr_entry(pmbr)
if pent is None:
log.debug("GPT not found in PMBR")
return False
if not self.try_load_gpt(pmbr): return False
log.info("GPT partition tables loaded")
self.loaded = True
return True
def create_pmbr(self) -> MasterBootRecord:
new_pmbr = MasterBootRecord()
new_pmbr.fill_header()
if self.boot_code:
size = MasterBootRecord.boot_code.size
code = bytes_pad(self.boot_code, size, trunc=True)
ctypes.memmove(new_pmbr.boot_code, code, size)
ppart = MbrPartEntry()
ppart.start_lba = 1
ppart.size_lba = self.total_lba - 1
ppart.start_head, ppart.start_track, ppart.start_sector = 0, 0, 2
ppart.end_head, ppart.end_track, ppart.end_sector = 255, 255, 255
ppart.set_type("gpt")
new_pmbr.partitions[0] = ppart
return new_pmbr
def create_gpt_entries(self) -> bytes:
es = sizeof(EfiPartEntry)
ec = self.entries_count if self.entries_count > 0 else 128
if len(self.partitions) > ec:
raise OverflowError("too many partitions")
self.resort_partitions()
data = b"".join(
part.to_entry()
for part in self.partitions
if part.type_uuid != NULL_UUID
)
if len(data) > ec * es:
raise OverflowError("partitions buffer too big")
return bytes_pad(data, ec * es)
def create_gpt_head(
self,
entries: bytes,
backup: bool = False
) -> EfiPartTableHeader:
if self.total_lba < 128:
raise ValueError("disk too small")
new_gpt = EfiPartTableHeader()
new_gpt.fill_header()
new_gpt.entry_size = sizeof(EfiPartEntry)
new_gpt.entries_count = self.entries_count
new_gpt.disk_guid = EfiGUID.from_uuid(self.uuid)
le = len(entries)
if le != new_gpt.entries_count * new_gpt.entry_size:
raise ValueError("entries size mismatch")
if le % self.sector != 0:
raise ValueError("bad entries size")
entries_sectors = le // self.sector
if entries_sectors != self.entries_sectors:
raise ValueError("entries sectors mismatch")
usable = self.get_usable_area()
new_gpt.first_usable_lba = usable.start
new_gpt.last_usable_lba = usable.end
if backup:
new_gpt.part_entry_lba = self.backup_entries_lba
new_gpt.current_lba = self.total_lba - 1
new_gpt.alternate_lba = 1
else:
new_gpt.part_entry_lba = self.main_entries_lba
new_gpt.current_lba = 1
new_gpt.alternate_lba = self.total_lba - 1
new_gpt.entries_crc32 = crc32(entries)
new_gpt.header.update_crc32(bytes(new_gpt))
return new_gpt
def recreate_header(self) -> dict:
new_pmbr = self.create_pmbr()
if new_pmbr is None: raise RuntimeError("generate pmbr failed")
log.debug(f"Protective MBR: {new_pmbr}")
new_gpt_entries = self.create_gpt_entries()
if new_gpt_entries is None: raise RuntimeError("generate gpt entries failed")
new_gpt_main = self.create_gpt_head(new_gpt_entries, backup=False)
if new_gpt_main is None: raise RuntimeError("generate gpt main head failed")
log.debug(f"GPT Main head: {new_gpt_main}")
new_gpt_back = self.create_gpt_head(new_gpt_entries, backup=True)
if new_gpt_back is None: raise RuntimeError("generate gpt backup head failed")
log.debug(f"GPT Backup head: {new_gpt_back}")
return {
"pmbr": new_pmbr,
"main": new_gpt_main,
"backup": new_gpt_back,
"entries": new_gpt_entries,
}
def write_table(self, table, lba: int):
data = bytes(table)
size = round_up(len(data), self.sector)
data = bytes_pad(data, size)
sectors = size // self.sector
area = Area(start=lba, size=sectors)
if self.get_used_areas().is_area_in(area):
raise RuntimeError("attempt write table into partition")
log.debug(f"Writing {len(data)} bytes to LBA {lba} ({sectors} sectors)")
self.write_lbas(lba, data, sectors)
def write_header(self):
if not self._fp.writable():
raise IOError("write is not allowed")
data = self.recreate_header()
self.write_table(data["pmbr"], 0)
self.write_table(data["main"], data["main"].current_lba)
self.write_table(data["backup"], data["backup"].current_lba)
self.write_table(data["entries"], data["main"].part_entry_lba)
self.write_table(data["entries"], data["backup"].part_entry_lba)
self._fp.flush()
log.info("GPT partition table saved")
def unload(self):
self.boot_code = bytes()
self.uuid = uuid4()
self.main_entries_lba = 2
self.entries_count = 128
self.loaded = False
self.partitions.clear()
def reload(self):
if not self.load_header():
raise IOError("Load GPT header failed")
def save(self):
self.write_header()
def create(self):
self.unload()
log.info("Created new GPT partition table")
def set_from(self, config: dict):
if "uuid" in config: self.uuid = UUID(config["uuid"])
if "entries_offset" in config:
self.main_entries_lba = self.size_to_sectors(config["entries_offset"])
if "entries_lba" in config:
self.main_entries_lba = config["entries_lba"]
if "entries_count" in config:
self.entries_count = config["entries_count"]
def to_dict(self) -> dict:
return {
"uuid": self.uuid,
"free": self.get_free_size(),
"sector": self.sector,
"sectors": self.total_lba,
"size": self.total_size,
"partitions": self.partitions,
"usable_area": self.get_usable_area(),
"free_area": self.get_free_areas(),
"entries_count": self.entries_count,
"main_entries": self.main_entries,
"backup_entries": self.backup_entries,
}
def __init__(self, fp: RawIOBase = None, path: str = None, sector: int = 512):
super().__init__(fp=fp, path=path, sector=sector)
self.boot_code = bytes()
self.uuid = NULL_UUID
self.main_entries_lba = -1
self.entries_count = -1
self.partitions = []
self.load_header()
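`create_gpt_head` relies on the UEFI rule that the header CRC32 is computed over `header_size` bytes with the `crc32` field zeroed first, which is what `update_crc32`/`check_crc32` implement. A trimmed stand-in for `EfiTableHeader` (far fewer fields than the real 92-byte header) showing just that rule:

```python
import ctypes
from binascii import crc32

class MiniHeader(ctypes.LittleEndianStructure):
    _pack_ = 1
    _fields_ = [
        ("signature", ctypes.c_char * 8),
        ("revision", ctypes.c_uint32),
        ("header_size", ctypes.c_uint32),
        ("crc32", ctypes.c_uint32),
    ]

def update_crc32(h: MiniHeader):
    # Zero the field, checksum header_size bytes, then store the result
    h.crc32 = 0
    h.crc32 = crc32(bytes(h)[:h.header_size])

h = MiniHeader(signature=b"EFI PART", revision=0x00010000,
               header_size=ctypes.sizeof(MiniHeader))
update_crc32(h)
```

Verification mirrors the update: zero the stored CRC, recompute over the same span, and compare against the saved value.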

from uuid import UUID
from logging import getLogger
from builder.disk.layout.layout import DiskLayout, DiskPart
from builder.disk.layout.gpt.struct import EfiPartEntry
from builder.disk.layout.gpt.types import DiskTypesGPT
from builder.disk.layout.gpt.uefi import EfiGUID
log = getLogger(__name__)
class DiskPartGPT(DiskPart):
layout: DiskLayout
type_uuid: UUID
uuid: UUID
idx: int
_part_name: str
_attributes: int
_start_lba: int
_end_lba: int
@property
def part_name(self) -> str: return self._part_name
@part_name.setter
def part_name(self, name: str): self._part_name = name
@property
def id(self) -> str:
return str(self.uuid)
@id.setter
def id(self, val: str):
self.uuid = UUID(val)
@property
def type(self) -> str:
return DiskTypesGPT.lookup_one_name(self.type_uuid)
@type.setter
def type(self, val: str):
tid = DiskTypesGPT.lookup_one_uuid(val)
if tid is None: raise ValueError(f"unknown type {val}")
self.type_uuid = tid
@property
def start_lba(self) -> int:
return self._start_lba
@start_lba.setter
def start_lba(self, start_lba: int):
self._start_lba = start_lba
@property
def end_lba(self) -> int:
return self._end_lba
@end_lba.setter
def end_lba(self, end_lba: int):
self._end_lba = end_lba
@property
def size_lba(self) -> int:
return self.end_lba - self.start_lba + 1
@size_lba.setter
def size_lba(self, size_lba: int):
self.end_lba = size_lba + self.start_lba - 1
@property
def attributes(self) -> int:
return self._attributes
@attributes.setter
def attributes(self, attributes: int):
self._attributes = attributes
@property
def partlabel(self) -> str:
return self.part_name
@property
def partuuid(self) -> str:
return self.id
def load_entry(self, part: EfiPartEntry):
self.type_uuid = part.type_guid.to_uuid()
self.uuid = part.unique_guid.to_uuid()
self.start_lba = part.start_lba
self.end_lba = part.end_lba
self.attributes = part.attributes
self.part_name = part.get_part_name()
def to_entry(self) -> EfiPartEntry:
part = EfiPartEntry()
part.type_guid = EfiGUID.from_uuid(self.type_uuid)
part.unique_guid = EfiGUID.from_uuid(self.uuid)
part.start_lba = self.start_lba
part.end_lba = self.end_lba
part.attributes = self.attributes
part.set_part_name(self.part_name)
return part
def __init__(
self,
layout: DiskLayout,
part: EfiPartEntry | None,
idx: int
):
super().__init__()
self.layout = layout
self.idx = idx
self.part_name = None
self.start_lba = 0
self.end_lba = 0
self.attributes = 0
if part: self.load_entry(part)
from builder.disk.layout.gpt.layout import DiskLayoutGPT
if not isinstance(layout, DiskLayoutGPT):
raise TypeError("require DiskLayoutGPT")
def to_dict(self) -> dict:
return {
"type_uuid": self.type_uuid,
"type_name": self.type,
"uuid": self.uuid,
"part_name": self.part_name,
"attributes": self.attributes,
"start_lba": self.start_lba,
"end_lba": self.end_lba,
"size_lba": self.size_lba,
}
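GPT start/end LBAs are inclusive at both ends, which is why `size_lba` adds one and its setter subtracts one. The arithmetic in isolation (hypothetical helper names):

```python
# Inclusive LBA ranges: a partition covering sectors 2048..4095 has 2048 sectors
def size_lba(start: int, end: int) -> int:
    return end - start + 1

def end_for_size(start: int, size: int) -> int:
    return start + size - 1

print(size_lba(2048, 4095))      # 2048
print(end_for_size(2048, 2048))  # 4095
```

Forgetting the off-by-one here is a classic way to produce partitions one sector too large or too small.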

import ctypes
from uuid import UUID
from logging import getLogger
from builder.lib.utils import bytes_pad
from builder.lib.serializable import SerializableDict
from builder.disk.layout.gpt.types import DiskTypesGPT
from builder.disk.layout.gpt.uefi import EfiTableHeader, EfiGUID
log = getLogger(__name__)
class EfiPartTableHeader(ctypes.Structure, SerializableDict):
_pack_ = 1
_fields_ = [
("header", EfiTableHeader),
("current_lba", ctypes.c_uint64),
("alternate_lba", ctypes.c_uint64),
("first_usable_lba", ctypes.c_uint64),
("last_usable_lba", ctypes.c_uint64),
("disk_guid", EfiGUID),
("part_entry_lba", ctypes.c_uint64),
("entries_count", ctypes.c_uint32),
("entry_size", ctypes.c_uint32),
("entries_crc32", ctypes.c_uint32),
]
EFI_PART_SIGN = b'EFI PART'
@property
def signature(self) -> ctypes.c_uint64:
return self.header.signature
@property
def revision(self) -> int:
return self.header.revision
@property
def header_size(self) -> int:
return self.header.header_size
@property
def crc32(self) -> int:
return self.header.crc32
def fill_header(self):
self.header.set_signature(self.EFI_PART_SIGN)
self.header.header_size = 92
self.header.revision = 0x00010000
def check_header(self) -> bool:
if not self.header.check_signature(self.EFI_PART_SIGN):
log.debug("GPT signature mismatch")
return False
if self.header.header_size < 92:
log.debug("GPT header size too small")
log.debug(f"{self.header.header_size} < 92")
return False
if not self.header.check_revision(1, 0):
log.debug("GPT revision mismatch")
log.debug(f"{self.header.get_revision()} != 1.0")
return False
if not self.header.check_crc32():
log.debug("GPT crc32 check failed")
return False
if self.entry_size != 128:
log.debug("GPT entry size unsupported")
log.debug(f"{self.entry_size} != 128")
return False
return True
def to_dict(self) -> dict:
return {
"header": self.header,
"current_lba": self.current_lba,
"alternate_lba": self.alternate_lba,
"first_usable_lba": self.first_usable_lba,
"last_usable_lba": self.last_usable_lba,
"disk_guid": str(self.disk_guid),
"part_entry_lba": self.part_entry_lba,
"entries_count": self.entries_count,
"entry_size": self.entry_size,
"entries_crc32": self.entries_crc32,
}
class EfiPartEntry(ctypes.Structure, SerializableDict):
_pack_ = 1
_fields_ = [
("type_guid", EfiGUID),
("unique_guid", EfiGUID),
("start_lba", ctypes.c_uint64),
("end_lba", ctypes.c_uint64),
("attributes", ctypes.c_uint64),
("part_name", ctypes.c_byte * 72),
]
def get_type_name(self) -> str:
return DiskTypesGPT.lookup_one_name(self.type_guid)
def get_type_uuid(self) -> UUID:
return DiskTypesGPT.lookup_one_uuid(self.type_guid)
def set_type(self, t: EfiGUID | UUID | str):
g = DiskTypesGPT.lookup_one_guid(t)
if g is None: raise ValueError(f"bad type {t}")
self.type_guid = g
def get_part_name(self) -> str:
return bytes(self.part_name).decode("UTF-16LE").rstrip('\u0000')
def set_part_name(self, name: str):
size = EfiPartEntry.part_name.size
data = name.encode("UTF-16LE")
if len(data) >= size: raise ValueError("name too long")
data = bytes_pad(data, size)
ctypes.memmove(self.part_name, data, size)
def check_type(self, t: EfiGUID | UUID | str) -> bool:
return DiskTypesGPT.equal(self.type_guid, t)
@property
def total_lba(self):
return self.end_lba - self.start_lba + 1
def to_dict(self) -> dict:
return {
"type_guid": str(self.type_guid),
"unique_guid": str(self.unique_guid),
"start_lba": self.start_lba,
"end_lba": self.end_lba,
"attributes": self.attributes,
"part_name": self.get_part_name(),
}
assert ctypes.sizeof(EfiPartTableHeader) == 92
assert ctypes.sizeof(EfiPartEntry) == 128
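`part_name` is a raw 72-byte field holding a NUL-padded UTF-16LE string; a ctypes byte array has no `.decode()`, so it has to be converted to `bytes` first. A standalone mirror of just that field (hypothetical class and helper names):

```python
import ctypes

class NameField(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("part_name", ctypes.c_byte * 72)]

def set_name(e: NameField, name: str):
    data = name.encode("UTF-16LE")
    if len(data) >= 72:
        raise ValueError("name too long")
    data = data.ljust(72, b"\x00")  # NUL-pad to the field size
    ctypes.memmove(e.part_name, data, 72)

def get_name(e: NameField) -> str:
    # Convert the ctypes array to bytes before decoding
    return bytes(e.part_name).decode("UTF-16LE").rstrip("\u0000")

e = NameField()
set_name(e, "EFI System")
print(get_name(e))  # EFI System
```

The `>=` in the length check deliberately reserves room for at least one trailing NUL terminator, matching `set_part_name` above.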

from uuid import UUID
from logging import getLogger
from builder.disk.layout.gpt.uefi import EfiGUID
from builder.disk.layout.types import DiskTypes
log = getLogger(__name__)
class DiskTypesGPT(DiskTypes):
@staticmethod
def lookup(t) -> list[tuple[UUID, str]]:
ret = []
ts = DiskTypesGPT.types
from builder.disk.layout.gpt.part import DiskPartGPT
from builder.disk.layout.gpt.struct import EfiPartEntry
if isinstance(t, DiskPartGPT):
u = t.type_uuid
elif isinstance(t, EfiPartEntry):
u = t.type_guid.to_uuid()
elif type(t) is EfiGUID:
u = t.to_uuid()
elif type(t) is UUID:
u = t
elif type(t) is str:
ret = [tn for tn in ts if tn[1] == t]
if len(ret) > 0: return ret
try: u = UUID(t)
except ValueError: return ret
else: return ret
return [tn for tn in ts if tn[0] == u]
@staticmethod
def lookup_one(t) -> tuple[UUID, str]:
l = DiskTypesGPT.lookup(t)
return l[0] if len(l) > 0 else None
@staticmethod
def lookup_one_uuid(t) -> UUID:
r = DiskTypesGPT.lookup_one(t)
return r[0] if r else None
@staticmethod
def lookup_one_guid(t) -> EfiGUID:
u = DiskTypesGPT.lookup_one_uuid(t)
return EfiGUID.from_uuid(u)
@staticmethod
def lookup_one_name(t) -> str:
r = DiskTypesGPT.lookup_one(t)
return r[1] if r else None
@staticmethod
def lookup_names(t) -> list[str]:
r = DiskTypesGPT.lookup(t)
return [t[1] for t in r]
@staticmethod
def equal(l, r) -> bool:
lf = DiskTypesGPT.lookup_one_uuid(l)
rf = DiskTypesGPT.lookup_one_uuid(r)
if lf is None or rf is None: return False
return lf == rf
types: list[tuple[UUID, str]] = [
(UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B"), "efi"),
(UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B"), "uefi"),
(UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B"), "esp"),
(UUID("024DEE41-33E7-11D3-9D69-0008C781F39F"), "mbr-part-scheme"),
(UUID("D3BFE2DE-3DAF-11DF-BA40-E3A556D89593"), "intel-fast-flash"),
(UUID("21686148-6449-6E6F-744E-656564454649"), "bios"),
(UUID("21686148-6449-6E6F-744E-656564454649"), "bios-boot"),
(UUID("F4019732-066E-4E12-8273-346C5641494F"), "sony-boot-partition"),
(UUID("BFBFAFE7-A34F-448A-9A5B-6213EB736C22"), "lenovo-boot-partition"),
(UUID("9E1A2D38-C612-4316-AA26-8B49521E5A8B"), "powerpc-prep-boot"),
(UUID("7412F7D5-A156-4B13-81DC-867174929325"), "onie-boot"),
(UUID("D4E6E2CD-4469-46F3-B5CB-1BFF57AFC149"), "onie-config"),
(UUID("E3C9E316-0B5C-4DB8-817D-F92DF00215AE"), "microsoft-reserved"),
(UUID("E3C9E316-0B5C-4DB8-817D-F92DF00215AE"), "msr"),
(UUID("EBD0A0A2-B9E5-4433-87C0-68B6B72699C7"), "microsoft-basic-data"),
(UUID("EBD0A0A2-B9E5-4433-87C0-68B6B72699C7"), "basic"),
(UUID("5808C8AA-7E8F-42E0-85D2-E1E90434CFB3"), "microsoft-ldm-metadata"),
(UUID("AF9B60A0-1431-4F62-BC68-3311714A69AD"), "microsoft-ldm-data"),
(UUID("DE94BBA4-06D1-4D40-A16A-BFD50179D6AC"), "windows-recovery-environment"),
(UUID("E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D"), "microsoft-storage-spaces"),
(UUID("75894C1E-3AEB-11D3-B7C1-7B03A0000000"), "hp-ux-data"),
(UUID("E2A1E728-32E3-11D6-A682-7B03A0000000"), "hp-ux-service"),
(UUID("0657FD6D-A4AB-43C4-84E5-0933C84B4F4F"), "linux-swap"),
(UUID("0FC63DAF-8483-4772-8E79-3D69D8477DE4"), "linux"),
(UUID("0FC63DAF-8483-4772-8E79-3D69D8477DE4"), "linux-filesystem"),
(UUID("3B8F8425-20E0-4F3B-907F-1A25A76F98E8"), "linux-server-data"),
(UUID("3B8F8425-20E0-4F3B-907F-1A25A76F98E8"), "linux-srv"),
(UUID("44479540-F297-41B2-9AF7-D131D5F0458A"), "linux-root-x86"),
(UUID("4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709"), "linux-root-x86-64"),
(UUID("6523F8AE-3EB1-4E2A-A05A-18B695AE656F"), "linux-root-alpha"),
(UUID("D27F46ED-2919-4CB8-BD25-9531F3C16534"), "linux-root-arc"),
(UUID("69DAD710-2CE4-4E3C-B16C-21A1D49ABED3"), "linux-root-arm"),
(UUID("B921B045-1DF0-41C3-AF44-4C6F280D3FAE"), "linux-root-arm64"),
(UUID("993D8D3D-F80E-4225-855A-9DAF8ED7EA97"), "linux-root-ia64"),
(UUID("77055800-792C-4F94-B39A-98C91B762BB6"), "linux-root-loongarch64"),
(UUID("37C58C8A-D913-4156-A25F-48B1B64E07F0"), "linux-root-mips32le"),
(UUID("700BDA43-7A34-4507-B179-EEB93D7A7CA3"), "linux-root-mips64le"),
(UUID("1AACDB3B-5444-4138-BD9E-E5C2239B2346"), "linux-root-hppa"),
(UUID("1DE3F1EF-FA98-47B5-8DCD-4A860A654D78"), "linux-root-ppc"),
(UUID("912ADE1D-A839-4913-8964-A10EEE08FBD2"), "linux-root-ppc64"),
(UUID("C31C45E6-3F39-412E-80FB-4809C4980599"), "linux-root-ppc64le"),
(UUID("60D5A7FE-8E7D-435C-B714-3DD8162144E1"), "linux-root-riscv32"),
(UUID("72EC70A6-CF74-40E6-BD49-4BDA08E8F224"), "linux-root-riscv64"),
(UUID("08A7ACEA-624C-4A20-91E8-6E0FA67D23F9"), "linux-root-s390"),
(UUID("5EEAD9A9-FE09-4A1E-A1D7-520D00531306"), "linux-root-s390x"),
(UUID("C50CDD70-3862-4CC3-90E1-809A8C93EE2C"), "linux-root-tilegx"),
(UUID("8DA63339-0007-60C0-C436-083AC8230908"), "linux-reserved"),
(UUID("933AC7E1-2EB4-4F13-B844-0E14E2AEF915"), "linux-home"),
(UUID("A19D880F-05FC-4D3B-A006-743F0F84911E"), "linux-raid"),
(UUID("E6D6D379-F507-44C2-A23C-238F2A3DF928"), "linux-lvm"),
(UUID("4D21B016-B534-45C2-A9FB-5C16E091FD2D"), "linux-variable-data"),
(UUID("4D21B016-B534-45C2-A9FB-5C16E091FD2D"), "linux-var-data"),
(UUID("4D21B016-B534-45C2-A9FB-5C16E091FD2D"), "linux-var"),
(UUID("7EC6F557-3BC5-4ACA-B293-16EF5DF639D1"), "linux-temporary-data"),
(UUID("7EC6F557-3BC5-4ACA-B293-16EF5DF639D1"), "linux-tmp-data"),
(UUID("7EC6F557-3BC5-4ACA-B293-16EF5DF639D1"), "linux-tmp"),
(UUID("75250D76-8CC6-458E-BD66-BD47CC81A812"), "linux-usr-x86"),
(UUID("8484680C-9521-48C6-9C11-B0720656F69E"), "linux-usr-x86-64"),
(UUID("E18CF08C-33EC-4C0D-8246-C6C6FB3DA024"), "linux-usr-alpha"),
(UUID("7978A683-6316-4922-BBEE-38BFF5A2FECC"), "linux-usr-arc"),
(UUID("7D0359A3-02B3-4F0A-865C-654403E70625"), "linux-usr-arm"),
(UUID("B0E01050-EE5F-4390-949A-9101B17104E9"), "linux-usr-arm64"),
(UUID("4301D2A6-4E3B-4B2A-BB94-9E0B2C4225EA"), "linux-usr-ia64"),
(UUID("E611C702-575C-4CBE-9A46-434FA0BF7E3F"), "linux-usr-loongarch64"),
(UUID("0F4868E9-9952-4706-979F-3ED3A473E947"), "linux-usr-mips32le"),
(UUID("C97C1F32-BA06-40B4-9F22-236061B08AA8"), "linux-usr-mips64le"),
(UUID("DC4A4480-6917-4262-A4EC-DB9384949F25"), "linux-usr-hppa"),
(UUID("7D14FEC5-CC71-415D-9D6C-06BF0B3C3EAF"), "linux-usr-ppc"),
(UUID("2C9739E2-F068-46B3-9FD0-01C5A9AFBCCA"), "linux-usr-ppc64"),
(UUID("15BB03AF-77E7-4D4A-B12B-C0D084F7491C"), "linux-usr-ppc64le"),
(UUID("B933FB22-5C3F-4F91-AF90-E2BB0FA50702"), "linux-usr-riscv32"),
(UUID("BEAEC34B-8442-439B-A40B-984381ED097D"), "linux-usr-riscv64"),
(UUID("CD0F869B-D0FB-4CA0-B141-9EA87CC78D66"), "linux-usr-s390"),
(UUID("8A4F5770-50AA-4ED3-874A-99B710DB6FEA"), "linux-usr-s390x"),
(UUID("55497029-C7C1-44CC-AA39-815ED1558630"), "linux-usr-tilegx"),
(UUID("D13C5D3B-B5D1-422A-B29F-9454FDC89D76"), "linux-root-verity-x86"),
(UUID("2C7357ED-EBD2-46D9-AEC1-23D437EC2BF5"), "linux-root-verity-x86-64"),
(UUID("FC56D9E9-E6E5-4C06-BE32-E74407CE09A5"), "linux-root-verity-alpha"),
(UUID("24B2D975-0F97-4521-AFA1-CD531E421B8D"), "linux-root-verity-arc"),
(UUID("7386CDF2-203C-47A9-A498-F2ECCE45A2D6"), "linux-root-verity-arm"),
(UUID("DF3300CE-D69F-4C92-978C-9BFB0F38D820"), "linux-root-verity-arm64"),
(UUID("86ED10D5-B607-45BB-8957-D350F23D0571"), "linux-root-verity-ia64"),
(UUID("F3393B22-E9AF-4613-A948-9D3BFBD0C535"), "linux-root-verity-loongarch64"),
(UUID("D7D150D2-2A04-4A33-8F12-16651205FF7B"), "linux-root-verity-mips32le"),
(UUID("16B417F8-3E06-4F57-8DD2-9B5232F41AA6"), "linux-root-verity-mips64le"),
(UUID("D212A430-FBC5-49F9-A983-A7FEEF2B8D0E"), "linux-root-verity-hppa"),
(UUID("98CFE649-1588-46DC-B2F0-ADD147424925"), "linux-root-verity-ppc"),
(UUID("9225A9A3-3C19-4D89-B4F6-EEFF88F17631"), "linux-root-verity-ppc64"),
(UUID("906BD944-4589-4AAE-A4E4-DD983917446A"), "linux-root-verity-ppc64le"),
(UUID("AE0253BE-1167-4007-AC68-43926C14C5DE"), "linux-root-verity-riscv32"),
(UUID("B6ED5582-440B-4209-B8DA-5FF7C419EA3D"), "linux-root-verity-riscv64"),
(UUID("7AC63B47-B25C-463B-8DF8-B4A94E6C90E1"), "linux-root-verity-s390"),
(UUID("B325BFBE-C7BE-4AB8-8357-139E652D2F6B"), "linux-root-verity-s390x"),
(UUID("966061EC-28E4-4B2E-B4A5-1F0A825A1D84"), "linux-root-verity-tilegx"),
(UUID("8F461B0D-14EE-4E81-9AA9-049B6FB97ABD"), "linux-usr-verity-x86"),
(UUID("77FF5F63-E7B6-4633-ACF4-1565B864C0E6"), "linux-usr-verity-x86-64"),
(UUID("8CCE0D25-C0D0-4A44-BD87-46331BF1DF67"), "linux-usr-verity-alpha"),
(UUID("FCA0598C-D880-4591-8C16-4EDA05C7347C"), "linux-usr-verity-arc"),
(UUID("C215D751-7BCD-4649-BE90-6627490A4C05"), "linux-usr-verity-arm"),
(UUID("6E11A4E7-FBCA-4DED-B9E9-E1A512BB664E"), "linux-usr-verity-arm64"),
(UUID("6A491E03-3BE7-4545-8E38-83320E0EA880"), "linux-usr-verity-ia64"),
(UUID("F46B2C26-59AE-48F0-9106-C50ED47F673D"), "linux-usr-verity-loongarch64"),
(UUID("46B98D8D-B55C-4E8F-AAB3-37FCA7F80752"), "linux-usr-verity-mips32le"),
(UUID("3C3D61FE-B5F3-414D-BB71-8739A694A4EF"), "linux-usr-verity-mips64le"),
(UUID("5843D618-EC37-48D7-9F12-CEA8E08768B2"), "linux-usr-verity-hppa"),
(UUID("DF765D00-270E-49E5-BC75-F47BB2118B09"), "linux-usr-verity-ppc"),
(UUID("BDB528A5-A259-475F-A87D-DA53FA736A07"), "linux-usr-verity-ppc64"),
(UUID("EE2B9983-21E8-4153-86D9-B6901A54D1CE"), "linux-usr-verity-ppc64le"),
(UUID("CB1EE4E3-8CD0-4136-A0A4-AA61A32E8730"), "linux-usr-verity-riscv32"),
(UUID("8F1056BE-9B05-47C4-81D6-BE53128E5B54"), "linux-usr-verity-riscv64"),
(UUID("B663C618-E7BC-4D6D-90AA-11B756BB1797"), "linux-usr-verity-s390"),
(UUID("31741CC4-1A2A-4111-A581-E00B447D2D06"), "linux-usr-verity-s390x"),
(UUID("2FB4BF56-07FA-42DA-8132-6B139F2026AE"), "linux-usr-verity-tilegx"),
(UUID("5996FC05-109C-48DE-808B-23FA0830B676"), "linux-root-verity-sign-x86"),
(UUID("41092B05-9FC8-4523-994F-2DEF0408B176"), "linux-root-verity-sign-x86-64"),
(UUID("D46495B7-A053-414F-80F7-700C99921EF8"), "linux-root-verity-sign-alpha"),
(UUID("143A70BA-CBD3-4F06-919F-6C05683A78BC"), "linux-root-verity-sign-arc"),
(UUID("42B0455F-EB11-491D-98D3-56145BA9D037"), "linux-root-verity-sign-arm"),
(UUID("6DB69DE6-29F4-4758-A7A5-962190F00CE3"), "linux-root-verity-sign-arm64"),
(UUID("E98B36EE-32BA-4882-9B12-0CE14655F46A"), "linux-root-verity-sign-ia64"),
(UUID("5AFB67EB-ECC8-4F85-AE8E-AC1E7C50E7D0"), "linux-root-verity-sign-loongarch64"),
(UUID("C919CC1F-4456-4EFF-918C-F75E94525CA5"), "linux-root-verity-sign-mips32le"),
(UUID("904E58EF-5C65-4A31-9C57-6AF5FC7C5DE7"), "linux-root-verity-sign-mips64le"),
(UUID("15DE6170-65D3-431C-916E-B0DCD8393F25"), "linux-root-verity-sign-hppa"),
(UUID("1B31B5AA-ADD9-463A-B2ED-BD467FC857E7"), "linux-root-verity-sign-ppc"),
(UUID("F5E2C20C-45B2-4FFA-BCE9-2A60737E1AAF"), "linux-root-verity-sign-ppc64"),
(UUID("D4A236E7-E873-4C07-BF1D-BF6CF7F1C3C6"), "linux-root-verity-sign-ppc64le"),
(UUID("3A112A75-8729-4380-B4CF-764D79934448"), "linux-root-verity-sign-riscv32"),
(UUID("EFE0F087-EA8D-4469-821A-4C2A96A8386A"), "linux-root-verity-sign-riscv64"),
(UUID("3482388E-4254-435A-A241-766A065F9960"), "linux-root-verity-sign-s390"),
(UUID("C80187A5-73A3-491A-901A-017C3FA953E9"), "linux-root-verity-sign-s390x"),
(UUID("B3671439-97B0-4A53-90F7-2D5A8F3AD47B"), "linux-root-verity-sign-tilegx"),
(UUID("974A71C0-DE41-43C3-BE5D-5C5CCD1AD2C0"), "linux-usr-verity-sign-x86"),
(UUID("E7BB33FB-06CF-4E81-8273-E543B413E2E2"), "linux-usr-verity-sign-x86-64"),
(UUID("5C6E1C76-076A-457A-A0FE-F3B4CD21CE6E"), "linux-usr-verity-sign-alpha"),
(UUID("94F9A9A1-9971-427A-A400-50CB297F0F35"), "linux-usr-verity-sign-arc"),
(UUID("D7FF812F-37D1-4902-A810-D76BA57B975A"), "linux-usr-verity-sign-arm"),
(UUID("C23CE4FF-44BD-4B00-B2D4-B41B3419E02A"), "linux-usr-verity-sign-arm64"),
(UUID("8DE58BC2-2A43-460D-B14E-A76E4A17B47F"), "linux-usr-verity-sign-ia64"),
(UUID("B024F315-D330-444C-8461-44BBDE524E99"), "linux-usr-verity-sign-loongarch64"),
(UUID("3E23CA0B-A4BC-4B4E-8087-5AB6A26AA8A9"), "linux-usr-verity-sign-mips32le"),
(UUID("F2C2C7EE-ADCC-4351-B5C6-EE9816B66E16"), "linux-usr-verity-sign-mips64le"),
(UUID("450DD7D1-3224-45EC-9CF2-A43A346D71EE"), "linux-usr-verity-sign-hppa"),
(UUID("7007891D-D371-4A80-86A4-5CB875B9302E"), "linux-usr-verity-sign-ppc"),
(UUID("0B888863-D7F8-4D9E-9766-239FCE4D58AF"), "linux-usr-verity-sign-ppc64"),
(UUID("C8BFBD1E-268E-4521-8BBA-BF314C399557"), "linux-usr-verity-sign-ppc64le"),
(UUID("C3836A13-3137-45BA-B583-B16C50FE5EB4"), "linux-usr-verity-sign-riscv32"),
(UUID("D2F9000A-7A18-453F-B5CD-4D32F77A7B32"), "linux-usr-verity-sign-riscv64"),
(UUID("17440E4F-A8D0-467F-A46E-3912AE6EF2C5"), "linux-usr-verity-sign-s390"),
(UUID("3F324816-667B-46AE-86EE-9B0C0C6C11B4"), "linux-usr-verity-sign-s390x"),
(UUID("4EDE75E2-6CCC-4CC8-B9C7-70334B087510"), "linux-usr-verity-sign-tilegx"),
(UUID("BC13C2FF-59E6-4262-A352-B275FD6F7172"), "linux-extended-boot"),
(UUID("773f91ef-66d4-49b5-bd83-d683bf40ad16"), "linux-home"),
(UUID("516E7CB4-6ECF-11D6-8FF8-00022D09712B"), "freebsd-data"),
(UUID("83BD6B9D-7F41-11DC-BE0B-001560B84F0F"), "freebsd-boot"),
(UUID("516E7CB5-6ECF-11D6-8FF8-00022D09712B"), "freebsd-swap"),
(UUID("516E7CB6-6ECF-11D6-8FF8-00022D09712B"), "freebsd-ufs"),
(UUID("516E7CBA-6ECF-11D6-8FF8-00022D09712B"), "freebsd-zfs"),
(UUID("516E7CB8-6ECF-11D6-8FF8-00022D09712B"), "freebsd-vinum"),
(UUID("48465300-0000-11AA-AA11-00306543ECAC"), "apple-hfs"),
(UUID("7C3457EF-0000-11AA-AA11-00306543ECAC"), "apple-apfs"),
(UUID("55465300-0000-11AA-AA11-00306543ECAC"), "apple-ufs"),
(UUID("52414944-0000-11AA-AA11-00306543ECAC"), "apple-raid"),
(UUID("52414944-5F4F-11AA-AA11-00306543ECAC"), "apple-raid-offline"),
(UUID("426F6F74-0000-11AA-AA11-00306543ECAC"), "apple-boot"),
(UUID("4C616265-6C00-11AA-AA11-00306543ECAC"), "apple-label"),
(UUID("5265636F-7665-11AA-AA11-00306543ECAC"), "apple-tv-recovery"),
(UUID("53746F72-6167-11AA-AA11-00306543ECAC"), "apple-core-storage"),
(UUID("69646961-6700-11AA-AA11-00306543ECAC"), "apple-silicon-boot"),
(UUID("52637672-7900-11AA-AA11-00306543ECAC"), "apple-silicon-recovery"),
(UUID("6A82CB45-1DD2-11B2-99A6-080020736631"), "solaris-boot"),
(UUID("6A85CF4D-1DD2-11B2-99A6-080020736631"), "solaris-root"),
(UUID("6A898CC3-1DD2-11B2-99A6-080020736631"), "solaris-usr"),
(UUID("6A87C46F-1DD2-11B2-99A6-080020736631"), "solaris-swap"),
(UUID("6A8B642B-1DD2-11B2-99A6-080020736631"), "solaris-backup"),
(UUID("6A8EF2E9-1DD2-11B2-99A6-080020736631"), "solaris-var"),
(UUID("6A90BA39-1DD2-11B2-99A6-080020736631"), "solaris-home"),
(UUID("49F48D32-B10E-11DC-B99B-0019D1879648"), "netbsd-swap"),
(UUID("49F48D5A-B10E-11DC-B99B-0019D1879648"), "netbsd-ffs"),
(UUID("49F48D82-B10E-11DC-B99B-0019D1879648"), "netbsd-lfs"),
(UUID("2DB519C4-B10F-11DC-B99B-0019D1879648"), "netbsd-concatenated"),
(UUID("2DB519EC-B10F-11DC-B99B-0019D1879648"), "netbsd-encrypted"),
(UUID("49F48DAA-B10E-11DC-B99B-0019D1879648"), "netbsd-raid"),
(UUID("FE3A2A5D-4F32-41A7-B725-ACCC3285A309"), "chromeos-kernel"),
(UUID("3CB8E202-3B7E-47DD-8A3C-7FF2A13CFCEC"), "chromeos-rootfs"),
(UUID("2E0A753D-9E48-43B0-8337-B15192CB1B5E"), "chromeos-reserved"),
(UUID("CAB6E88E-ABF3-4102-A07A-D4BB9BE3C1D3"), "chromeos-firmware"),
(UUID("09845860-705F-4BB5-B16C-8A8A099CAF52"), "chromeos-minios"),
(UUID("3F0F8318-F146-4E6B-8222-C28C8F02E0D5"), "chromeos-hibernate"),
(UUID("45B0969E-9B03-4F30-B4C6-B4B80CEFF106"), "ceph-journal"),
(UUID("45B0969E-9B03-4F30-B4C6-5EC00CEFF106"), "ceph-encrypted-journal"),
(UUID("4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D"), "ceph-osd"),
(UUID("4FBD7E29-9D25-41B8-AFD0-5EC00CEFF05D"), "ceph-crypt-osd"),
(UUID("AA31E02A-400F-11DB-9590-000C2911D1B8"), "vmware-vmfs"),
(UUID("9D275380-40AD-11DB-BF97-000C2911D1B8"), "vmware-diagnostic"),
(UUID("381CFCCC-7288-11E0-92EE-000C2911D0B2"), "vmware-vsan"),
(UUID("77719A0C-A4A0-11E3-A47E-000C29745A24"), "vmware-virsto"),
(UUID("9198EFFC-31C0-11DB-8F78-000C2911D1B8"), "vmware-reserved"),
(UUID("824CC7A0-36A8-11E3-890A-952519AD3F61"), "openbsd-data"),
(UUID("3DE21764-95BD-54BD-A5C3-4ABE786F38A8"), "uboot-env"),
]

@ -0,0 +1,118 @@
import ctypes
from binascii import crc32
from logging import getLogger
from builder.lib.serializable import Serializable, SerializableDict
from uuid import UUID, uuid4
log = getLogger(__name__)
class EfiTableHeader(ctypes.Structure, SerializableDict):
_fields_ = [
("signature", ctypes.c_uint64),
("revision", ctypes.c_uint32),
("header_size", ctypes.c_uint32),
("crc32", ctypes.c_uint32),
("reserved", ctypes.c_uint32),
]
def set_signature(self, value: str | int | bytes):
vt = type(value)
if vt is str: r = int.from_bytes(value.encode(), "little")
elif vt is bytes: r = int.from_bytes(value, "little")
elif vt is int: r = value
else: raise TypeError("bad value type")
self.signature = r
def get_signature(self) -> bytes:
return ctypes.string_at(ctypes.byref(self), 8)
def get_revision(self) -> tuple[int, int]:
return (
self.revision >> 0x10 & 0xFFFF,
self.revision & 0xFFFF,
)
def calc_crc32(self, data: bytes = None) -> int:
orig = self.crc32
self.crc32 = 0
if data is None: data = ctypes.string_at(
ctypes.byref(self), self.header_size
)
value = crc32(data, 0)
self.crc32 = orig
return value
def update_crc32(self, data: bytes = None):
self.crc32 = self.calc_crc32(data)
def check_signature(self, value: str | int | bytes) -> bool:
vt = type(value)
if vt is int: return self.signature == value
b = self.get_signature()
if vt is bytes: return b == value
if vt is str: return b == value.encode()
raise TypeError("bad value type")
def check_revision(self, major: int, minor: int) -> bool:
rev = self.get_revision()
return rev[0] == major and rev[1] == minor
def check_crc32(self) -> bool:
return self.calc_crc32() == self.crc32
def to_dict(self) -> dict:
return {
"signature": self.get_signature().decode(),
"revision": ".".join(map(str, self.get_revision())),
"header_size": self.header_size,
"crc32": self.crc32,
}
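A minimal standalone sketch of the same CRC scheme (zero the crc32 field, then checksum header_size bytes), assuming the 24-byte header layout above; `FMT` and `header_crc` are illustrative names, not part of the builder:

```python
import struct
import zlib

# Field layout mirrors EfiTableHeader: signature, revision,
# header_size, crc32, reserved (illustrative sketch, not the real class).
FMT = "<QIIII"

def header_crc(signature: int, revision: int, header_size: int) -> int:
	# Per the UEFI spec the CRC is computed with the crc32 field zeroed.
	raw = struct.pack(FMT, signature, revision, header_size, 0, 0)
	return zlib.crc32(raw[:header_size])

sig = int.from_bytes(b"EFI PART", "little")
crc = header_crc(sig, 1 << 16, struct.calcsize(FMT))
```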
class EfiGUID(ctypes.Structure, Serializable):
_fields_ = [
("d1", ctypes.c_uint32),
("d2", ctypes.c_uint16),
("d3", ctypes.c_uint16),
("d4", ctypes.c_uint8 * 8),
]
	def to_uuid(self) -> UUID:
		u = bytes()
		u += int.to_bytes(self.d1, 4, "big")
		u += int.to_bytes(self.d2, 2, "big")
		u += int.to_bytes(self.d3, 2, "big")
		u += bytes(self.d4)
		return UUID(bytes=u)
	def set_uuid(self, u: UUID):
		b = u.bytes
		self.d1 = int.from_bytes(b[0:4], "big")
		self.d2 = int.from_bytes(b[4:6], "big")
		self.d3 = int.from_bytes(b[6:8], "big")
		for i in range(8):
			self.d4[i] = b[i + 8]
@staticmethod
def from_uuid(u: UUID):
if u is None: return None
g = EfiGUID()
g.set_uuid(u)
return g
@staticmethod
def generate():
return EfiGUID.from_uuid(uuid4())
def serialize(self) -> str:
return str(self.to_uuid())
	def unserialize(self, o: str):
		self.set_uuid(UUID(o))
def __str__(self) -> str:
return self.serialize()
assert(ctypes.sizeof(EfiTableHeader()) == 24)
assert(ctypes.sizeof(EfiGUID()) == 16)
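The byte-order handling above can be illustrated standalone: EFI stores the first three GUID fields little-endian on disk, while the textual UUID form is big-endian. `uuid_to_efi_bytes` and `efi_bytes_to_uuid` are hypothetical helpers built only on the standard library:

```python
import struct
import uuid

def uuid_to_efi_bytes(u: uuid.UUID) -> bytes:
	# First three fields go little-endian on disk; the rest is raw bytes.
	return struct.pack("<IHH8s", u.time_low, u.time_mid, u.time_hi_version, u.bytes[8:])

def efi_bytes_to_uuid(b: bytes) -> uuid.UUID:
	d1, d2, d3, d4 = struct.unpack("<IHH8s", b)
	return uuid.UUID(fields=(d1, d2, d3, d4[0], d4[1], int.from_bytes(d4[2:], "big")))

esp = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")  # EFI System Partition
```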

@ -0,0 +1,18 @@
import ctypes
BLKSSZGET = 0x1268
BLKGETSIZE64 = 0x80081272
HDIO_GETGEO = 0x0301
class HDGeometry(ctypes.Structure):
_fields_ = [
("heads", ctypes.c_ubyte),
("sectors", ctypes.c_ubyte),
("cylinders", ctypes.c_ushort),
("start", ctypes.c_ulong),
]
assert(ctypes.sizeof(HDGeometry()) == 16)
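The size assertion above holds because of structure padding; a standalone sketch (`HDGeometryDemo` is an illustrative copy) shows where the padding lands, with the exact numbers assuming an LP64 platform such as x86-64 Linux:

```python
import ctypes

class HDGeometryDemo(ctypes.Structure):
	# Same layout as HDIO_GETGEO's struct hd_geometry.
	_fields_ = [
		("heads", ctypes.c_ubyte),
		("sectors", ctypes.c_ubyte),
		("cylinders", ctypes.c_ushort),
		("start", ctypes.c_ulong),
	]

# `start` is padded out to c_ulong's natural alignment, so the 4 bytes
# of header fields grow to 8 on LP64 and the total size becomes 16.
offset = HDGeometryDemo.start.offset
total = ctypes.sizeof(HDGeometryDemo)
```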

@ -0,0 +1,140 @@
from math import ceil
from io import RawIOBase
from logging import getLogger
from builder.lib.utils import size_to_bytes
from builder.lib.serializable import SerializableDict
from builder.lib.area import Area
from builder.disk.layout.dio import DiskIO
from builder.disk.layout.area import DiskArea
from builder.disk.layout.part import DiskPart
log = getLogger(__name__)
class DiskLayout(DiskIO, DiskArea, SerializableDict):
partitions: list[DiskPart]
loaded: bool
@property
def id(self) -> str: pass
@id.setter
def id(self, val: str): pass
def create(self): pass
def reload(self): pass
def unload(self): pass
def save(self): pass
def set_from(self, config: dict): pass
def size_to_bytes(self, value: str | int, alt_units: dict = None) -> int:
units = {
"s": self.sector,
"sector": self.sector,
"sectors": self.sector
}
if alt_units:
units.update(alt_units)
return size_to_bytes(value, units)
def size_to_sectors(self, value: str | int, alt_units: dict = None) -> int:
ret = self.size_to_bytes(value, alt_units)
return ceil(ret / self.sector)
def _parse_area(self, config: dict) -> Area:
start, end, size = -1, -1, -1
if "start" in config: start = self.size_to_sectors(config["start"])
if "offset" in config: start = self.size_to_sectors(config["offset"])
if "end" in config: end = self.size_to_sectors(config["end"])
if "size" in config: size = self.size_to_sectors(config["size"])
if "length" in config: size = self.size_to_sectors(config["length"])
if "start_lba" in config: start = config["start_lba"]
if "offset_lba" in config: start = config["offset_lba"]
if "end_lba" in config: end = config["end_lba"]
if "size_lba" in config: size = config["size_lba"]
if "length_lba" in config: size = config["length_lba"]
return Area(start, end, size)
def parse_area(self, config: dict) -> Area:
area = self._parse_area(config)
area.fixup()
return area
def parse_free_area(self, config: dict) -> Area:
return self.find_free_area(area=self._parse_area(config))
def resort_partitions(self):
self.partitions.sort(key=lambda p: p.start_lba)
idx = 0
for part in self.partitions:
part.idx = idx
idx += 1
def add_partition_from(self, config: dict) -> DiskPart:
area = self.parse_free_area(config)
if area is None: raise ValueError("no free area found")
ptype = config["ptype"] if "ptype" in config else "linux"
return self.add_partition(ptype, area=area)
def del_partition(self, part: DiskPart):
if part not in self.partitions:
if part.layout == self:
				raise ValueError("partition already removed")
raise KeyError("partition not found")
self.partitions.remove(part)
def add_partition(
self,
ptype: str = None,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
) -> DiskPart: pass
def __init__(
self,
fp: RawIOBase = None,
path: str = None,
sector: int = 512
):
DiskIO.__init__(self)
self.partitions = []
self.loaded = False
self._sector = sector
if sector < self._min_sector:
raise ValueError("bad sector size")
if fp: self._fp = fp
elif path:
self._fp = open(path, "wb+")
self._opened = True
else: raise ValueError("no I/O interface")
def __del__(self):
if self._opened: self._fp.close()
def __len__(self) -> int:
return len(self.partitions)
def __setitem__(self, key: int, value: DiskPart):
self.resort_partitions()
self.partitions[key] = value
def __getitem__(self, key: int) -> DiskPart:
self.resort_partitions()
return self.partitions[key]
def __delitem__(self, key: int):
self.resort_partitions()
self.del_partition(self.partitions[key])
self.resort_partitions()
	def __iadd__(self, value: DiskPart):
		self.resort_partitions()
		value.idx = len(self.partitions)
		self.partitions.append(value)
		self.resort_partitions()
		return self
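A minimal sketch of the rounding rule behind `size_to_sectors`, assuming plain byte counts (the real `size_to_bytes` also understands unit suffixes such as "s" for sectors); `bytes_to_sectors` is an illustrative name:

```python
from math import ceil

def bytes_to_sectors(value: int, sector: int = 512) -> int:
	# Round up so the allocation always covers the requested bytes.
	return ceil(value / sector)
```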

@ -0,0 +1,270 @@
from os import urandom
from io import RawIOBase
from logging import getLogger
from builder.lib.area import Area, Areas
from builder.disk.layout.layout import DiskLayout
from builder.disk.layout.mbr.types import DiskTypesMBR
from builder.disk.layout.mbr.struct import MasterBootRecord, MbrPartEntry
from builder.disk.layout.mbr.part import DiskPartMBR
log = getLogger(__name__)
class DiskLayoutMBR(DiskLayout):
boot_code: bytes
mbr_id: int
partitions: list[DiskPartMBR]
@property
def id(self) -> str:
return f"{self.mbr_id:08x}"
@id.setter
def id(self, val: str):
self.mbr_id = int(val, base=16)
def del_partition(self, part: DiskPartMBR):
DiskLayout.del_partition(self, part)
if DiskTypesMBR.equal(part, "extended") and not part.logical:
parts = [p for p in self.partitions if p.extend == part]
for p in parts: self.del_partition(p)
def add_partition(
self,
ptype: int | str = None,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
logical: bool | None = None,
) -> DiskPartMBR | None:
area = self.find_free_area(start, end, size, area)
if area is None: return None
if ptype is None: ptype = "linux"
extend: DiskPartMBR | None = None
ps = self.partitions
tid = DiskTypesMBR.lookup_one_id("extended")
primary: list[DiskPartMBR] = [p for p in ps if not p.logical]
extended: list[DiskPartMBR] = [p for p in ps if p.type_id == tid]
		if logical is None: logical = len(primary) >= 4
		if logical:
			if len(extended) == 0: raise RuntimeError("no extended table")
			if DiskTypesMBR.equal(ptype, "extended"):
				raise ValueError("attempt to add extended table as logical")
			extend = next((e for e in extended if e.to_area().is_area_in(area)), None)
			if extend is None: raise ValueError(
				"logical partition out of extended table"
			)
elif len(primary) >= 4:
raise ValueError("no space for primary partition")
self.resort_partitions()
idx = len(self.partitions)
item = MbrPartEntry()
item.set_start_lba(area.start)
		item.set_size_lba(area.size)
item.set_type(ptype)
part = DiskPartMBR(self, item, idx)
if logical: part.extend = extend
self.partitions.insert(idx, part)
pl = "logical" if logical else "primary"
log.debug(
f"Added {pl} partition {idx} "
f"start LBA {area.start} "
f"end LBA {area.end} "
f"type {ptype}"
)
return part
def add_partition_from(self, config: dict) -> DiskPartMBR:
area = self.parse_free_area(config)
if area is None: raise ValueError("no free area found")
ptype = config["ptype"] if "ptype" in config else None
logical = config["logical"] if "logical" in config else None
part = self.add_partition(ptype, area=area, logical=logical)
if part:
if "bootable" in config:
part.bootable = config["bootable"]
return part
def get_usable_area(self) -> Area | None:
if self.total_lba <= 2: return None
end = self.total_lba - 1
start = min(self.align_lba, end)
return Area(start=start, end=end).fixup()
def get_used_areas(self, table=False) -> Areas:
areas = Areas()
if table: areas.add(start=1, size=1)
for part in self.partitions:
if part.size_lba <= 0: continue
start, size = part.start_lba, part.size_lba
if part.logical and table: size += 1
if DiskTypesMBR.equal(part, "extended"):
if not table: continue
size = 1
areas.add(start=start, size=size)
areas.merge()
return areas
def get_free_areas(self) -> Areas:
areas = Areas()
usable = self.get_usable_area()
if usable is None: return areas
areas.add(area=usable)
for part in self.partitions:
start = part.start_lba
end = part.end_lba
size = part.size_lba
if DiskTypesMBR.equal(part, "extended"):
end = -1
size = 1
elif part.logical:
end += 1
size += 1
areas.splice(start, end, size)
areas.align(self.align_lba)
return areas
def create_mbr(self) -> MasterBootRecord:
new_mbr = MasterBootRecord()
new_mbr.fill_header()
if self.boot_code:
new_mbr.boot_code = self.boot_code
new_mbr.mbr_id = self.mbr_id
idx = 0
for part in self.partitions:
if part.logical: continue
if idx >= 4: raise RuntimeError("too many primary partitions")
new_mbr.partitions[idx] = part.to_entry()
idx += 1
return new_mbr
def try_load_mbr(self) -> MasterBootRecord | None:
mbr_data = self.read_lba(0)
mbr = MasterBootRecord.from_buffer_copy(mbr_data)
if not mbr.check_signature():
log.debug("Bad MBR")
return None
self.mbr_id = mbr.mbr_id
self.boot_code = mbr.boot_code
log.debug(f"Found MBR id {self.id}")
return mbr
def try_load_mbr_extended_entries(self, ext: DiskPartMBR) -> list[DiskPartMBR] | None:
extends: list[DiskPartMBR] = []
ebr_data = self.read_lba(ext.start_lba)
ebr = MasterBootRecord.from_buffer_copy(ebr_data)
if not ebr.check_signature():
if ebr.signature == 0:
log.debug("Empty EBR")
return extends
log.debug("Bad EBR")
return None
for item in ebr.partitions:
idx = len(self.partitions)
part = DiskPartMBR(self, item, idx)
if part.size_lba == 0: continue
part.extend = ext.get_root_ebr()
part.logical = True
if DiskTypesMBR.equal(part.type_id, "extended"):
part.start_lba += part.extend.start_lba
extends.append(part)
else:
part.start_lba += ext.start_lba
self.partitions.insert(idx, part)
log.debug(
f"Found logical partition {idx} "
f"start LBA {part.start_lba} "
f"end LBA {part.end_lba} "
f"type {part.type}"
)
return extends
def try_load_mbr_entries(self, mbr: MasterBootRecord) -> bool:
ret = True
nested: list[DiskPartMBR] = []
extends: list[DiskPartMBR] = []
self.partitions.clear()
log.debug("Try loading MBR primary partitions")
for item in mbr.partitions:
if item.size_lba == 0: continue
idx = len(self.partitions)
part = DiskPartMBR(self, item, idx)
if DiskTypesMBR.equal(part.type_id, "extended"):
extends.append(part)
self.partitions.insert(idx, part)
log.debug(
f"Found primary partition {idx} "
f"start LBA {part.start_lba} "
f"end LBA {part.end_lba} "
f"type {part.type}"
)
while len(extends) > 0:
for extend in extends:
log.debug(
"Try loading MBR logical partitions from "
f"LBA {extend.start_lba}"
)
ne = self.try_load_mbr_extended_entries(extend)
if ne is None: ret = False
else: nested.extend(ne)
extends = nested
nested = []
cnt = len(self.partitions)
if ret: log.debug(f"Found {cnt} partitions")
return ret
def create_ebr_chains(self) -> dict:
pass
def load_header(self) -> bool:
self.unload()
mbr = self.try_load_mbr()
if mbr is None: return False
if not self.try_load_mbr_entries(mbr): return False
self.loaded = True
return True
def unload(self):
self.loaded = False
self.mbr_id = 0
self.boot_code = bytes()
self.partitions.clear()
def reload(self):
if self.load_header(): return
raise IOError("Load MBR header failed")
def save(self):
pass
def create(self):
self.unload()
		self.mbr_id = int.from_bytes(urandom(4), "little")
def set_from(self, config: dict):
if "id" in config: self.mbr_id = int(config["id"])
def to_dict(self) -> dict:
return {
"id": self.id,
"mbr_id": self.mbr_id,
"sector": self.sector,
"sectors": self.total_lba,
"size": self.total_size,
"free": self.get_free_size(),
"partitions": self.partitions,
"usable_area": self.get_usable_area(),
"free_area": self.get_free_areas(),
}
def __init__(
self,
fp: RawIOBase = None,
path: str = None,
sector: int = 512
):
super().__init__(fp=fp, path=path, sector=sector)
self.partitions = []
self.mbr_id = 0
self.boot_code = bytes()
self.load_header()

@ -0,0 +1,117 @@
from logging import getLogger
from typing import Self
from builder.disk.layout.layout import DiskLayout, DiskPart
from builder.disk.layout.mbr.struct import MbrPartEntry
from builder.disk.layout.mbr.types import DiskTypesMBR
log = getLogger(__name__)
class DiskPartMBR(DiskPart):
layout: DiskLayout
boot_indicator: int
os_indicator: int
idx: int
logical: bool
extend: Self
_start_lba: int
_size_lba: int
def get_root_ebr(self) -> Self:
ebr = self
while ebr.logical:
ebr = ebr.extend
return ebr
@property
def bootable(self) -> bool:
return self.boot_indicator == 0x80
@bootable.setter
def bootable(self, bootable: bool):
self.boot_indicator = 0x80 if bootable else 0
@property
def id(self) -> str:
return f"{self.layout.id}-{self.idx+1}"
@id.setter
def id(self, val: str):
raise NotImplementedError("cannot change id of mbr part")
@property
def type_id(self) -> int:
return self.os_indicator
@type_id.setter
def type_id(self, tid: int):
self.type = tid
@property
def type(self) -> str:
return DiskTypesMBR.lookup_one_name(self.os_indicator)
@type.setter
def type(self, tid: str):
g = DiskTypesMBR.lookup_one_id(tid)
if g is None: raise ValueError(f"bad type {tid}")
self.os_indicator = g
@property
def start_lba(self) -> int: return self._start_lba
@start_lba.setter
def start_lba(self, start_lba: int): self._start_lba = start_lba
@property
def size_lba(self) -> int: return self._size_lba
@size_lba.setter
def size_lba(self, size_lba: int): self._size_lba = size_lba
@property
def end_lba(self) -> int:
return self.size_lba + self.start_lba - 1
@end_lba.setter
def end_lba(self, end_lba: int):
self.size_lba = end_lba - self.start_lba + 1
def load_entry(self, part: MbrPartEntry):
self.start_lba = part.start_lba
self.size_lba = part.size_lba
self.boot_indicator = part.boot_indicator
self.os_indicator = part.os_indicator
def to_entry(self) -> MbrPartEntry:
part = MbrPartEntry()
part.start_lba = self.start_lba
part.size_lba = self.size_lba
part.boot_indicator = self.boot_indicator
part.os_indicator = self.os_indicator
return part
def __init__(self, layout: DiskLayout, part: MbrPartEntry, idx: int):
super().__init__()
self.layout = layout
self.idx = idx
self.start_lba = 0
self.size_lba = 0
self.boot_indicator = 0
self.os_indicator = 0
self.logical = False
self.extend = None
if part: self.load_entry(part)
from builder.disk.layout.mbr.layout import DiskLayoutMBR
		if not isinstance(layout, DiskLayoutMBR):
			raise TypeError("require DiskLayoutMBR")
def to_dict(self) -> dict:
return {
"logical": self.logical,
"bootable": self.bootable,
"type_id": self.type_id,
"type_name": self.type,
"start_lba": self.start_lba,
"end_lba": self.end_lba,
"size_lba": self.size_lba,
}

@ -0,0 +1,106 @@
import ctypes
from logging import getLogger
from builder.lib.serializable import SerializableDict
from builder.disk.layout.mbr.types import DiskTypesMBR
log = getLogger(__name__)
class MbrPartEntry(ctypes.Structure, SerializableDict):
_fields_ = [
("boot_indicator", ctypes.c_uint8),
("start_head", ctypes.c_uint8),
("start_sector", ctypes.c_uint8),
("start_track", ctypes.c_uint8),
("os_indicator", ctypes.c_uint8),
("end_head", ctypes.c_uint8),
("end_sector", ctypes.c_uint8),
("end_track", ctypes.c_uint8),
("start_lba", ctypes.c_uint32),
("size_lba", ctypes.c_uint32),
]
heads: int=255
sectors: int=63
def is_bootable(self) -> bool:
return self.boot_indicator == 0x80
def set_bootable(self, bootable: bool):
self.boot_indicator = 0x80 if bootable else 0
def get_type_name(self) -> str:
return DiskTypesMBR.lookup_one_name(self.os_indicator)
def get_type_id(self) -> int:
return DiskTypesMBR.lookup_one_id(self.os_indicator)
def set_type(self, t: int|str):
g = DiskTypesMBR.lookup_one_id(t)
if g is None: raise ValueError(f"bad type {t}")
self.os_indicator = g
def set_start_lba(self, start_lba: int):
c, h, s = lba_to_chs(start_lba, self.sectors, self.heads)
self.start_head = h
self.start_sector = s
self.start_track = c
self.start_lba = start_lba
def set_end_lba(self, end_lba: int):
c, h, s = lba_to_chs(end_lba, self.sectors, self.heads)
self.end_head = h
self.end_sector = s
self.end_track = c
self.size_lba = end_lba - self.start_lba + 1
def set_size_lba(self, size_lba: int):
end_lba = size_lba + self.start_lba - 1
c, h, s = lba_to_chs(end_lba, self.sectors, self.heads)
self.end_head = h
self.end_sector = s
self.end_track = c
self.size_lba = size_lba
def to_dict(self) -> dict:
ret = {field[0]: getattr(self, field[0]) for field in self._fields_}
ret["bootable"] = self.is_bootable()
ret["type_id"] = self.get_type_id()
ret["type_name"] = self.get_type_name()
return ret
class MasterBootRecord(ctypes.Structure, SerializableDict):
_pack_ = 1
_fields_ = [
("boot_code", ctypes.c_byte * 440),
("mbr_id", ctypes.c_uint32),
("reserved", ctypes.c_uint16),
("partitions", MbrPartEntry * 4),
("signature", ctypes.c_uint16),
]
MBR_SIGNATURE: int = 0xaa55
def fill_header(self):
self.signature = self.MBR_SIGNATURE
def check_signature(self) -> bool:
return self.signature == self.MBR_SIGNATURE
def to_dict(self) -> dict:
parts = [part for part in self.partitions if part.os_indicator != 0]
return {
"mbr_id": f"{self.mbr_id:08x}",
"partitions": parts,
"signature": self.signature,
}
assert(ctypes.sizeof(MbrPartEntry()) == 16)
assert(ctypes.sizeof(MasterBootRecord()) == 512)
def lba_to_chs(lba: int, sectors: int = 63, heads: int = 255):
	# CHS sector numbers are 1-based; head and cylinder numbers are 0-based
	sector = (lba % sectors) + 1
	head = (lba // sectors) % heads
	cylinder = lba // (sectors * heads)
	return cylinder, head, sector
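A quick sanity check of the conventional LBA-to-CHS mapping (1-based sectors, 0-based heads and cylinders); `lba_to_chs_std` is an illustrative standalone copy, and real implementations additionally clamp the result once the LBA exceeds what 10-bit cylinders can address:

```python
def lba_to_chs_std(lba: int, spt: int = 63, heads: int = 255) -> tuple[int, int, int]:
	# Sector numbers start at 1; head and cylinder numbers start at 0.
	cylinder = lba // (spt * heads)
	head = (lba // spt) % heads
	sector = (lba % spt) + 1
	return cylinder, head, sector
```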

@ -0,0 +1,82 @@
from logging import getLogger
from builder.disk.layout.types import DiskTypes
log = getLogger(__name__)
class DiskTypesMBR(DiskTypes):
@staticmethod
def lookup(t) -> list[tuple[int, str]]:
ret: list[tuple[int, str]] = []
ts = DiskTypesMBR.types
from builder.disk.layout.mbr.struct import MbrPartEntry
from builder.disk.layout.mbr.part import DiskPartMBR
if isinstance(t, DiskPartMBR):
u = t.type_id
elif isinstance(t, MbrPartEntry):
u = int(t.os_indicator)
elif type(t) is int:
u = t
elif type(t) is str:
ret = [tn for tn in ts if tn[1] == t]
if len(ret) > 0: return ret
try: u = int(t)
			except ValueError: return ret
else: return ret
return [tn for tn in ts if tn[0] == u]
	@staticmethod
	def lookup_one(t) -> tuple[int, str]:
		l = DiskTypesMBR.lookup(t)
		return l[0] if len(l) > 0 else None
@staticmethod
def lookup_one_id(t) -> int:
r = DiskTypesMBR.lookup_one(t)
return r[0] if r else 0
@staticmethod
def lookup_one_name(t) -> str:
r = DiskTypesMBR.lookup_one(t)
return r[1] if r else None
@staticmethod
def lookup_names(t) -> list[str]:
r = DiskTypesMBR.lookup(t)
return [t[1] for t in r]
@staticmethod
def equal(l, r) -> bool:
lf = DiskTypesMBR.lookup_one_id(l)
rf = DiskTypesMBR.lookup_one_id(r)
if lf == 0 or rf == 0: return False
return lf == rf
types: list[tuple[int, str]] = [
(0x01, "fat12"),
(0x05, "extended"),
(0x06, "fat16"),
(0x07, "ntfs"),
(0x07, "exfat"),
(0x07, "hpfs"),
(0x0b, "fat32"),
(0x16, "hidden-fat16"),
(0x17, "hidden-ntfs"),
(0x17, "hidden-exfat"),
(0x17, "hidden-hpfs"),
(0x1b, "hidden-fat32"),
(0x81, "minix"),
(0x82, "linux-swap"),
(0x83, "linux"),
(0x85, "linux-extended"),
(0x85, "linuxex"),
(0x88, "linux-plaintext"),
(0x8e, "linux-lvm"),
(0xa5, "freebsd"),
(0xa6, "openbsd"),
(0xa9, "netbsd"),
(0xaf, "hfs"),
(0xee, "gpt"),
(0xef, "efi"),
(0xef, "uefi"),
(0xef, "esp"),
(0xfd, "linux-raid"),
]

@ -0,0 +1,95 @@
from logging import getLogger
from builder.lib.serializable import SerializableDict
from builder.lib.area import Area
log = getLogger(__name__)
class DiskPart(SerializableDict):
layout = None
idx: int
@property
def part_name(self) -> str: pass
@part_name.setter
def part_name(self, name: str): pass
@property
def type(self) -> str: pass
@type.setter
def type(self, val: str): pass
@property
def id(self) -> str: pass
@id.setter
def id(self, val: str): pass
@property
def start_lba(self) -> int: pass
@start_lba.setter
def start_lba(self, start_lba: int): pass
@property
def end_lba(self) -> int: pass
@end_lba.setter
def end_lba(self, end_lba: int): pass
@property
def size_lba(self) -> int: pass
@size_lba.setter
def size_lba(self, size_lba: int): pass
@property
def partlabel(self) -> str: pass
@property
def partuuid(self) -> str: pass
def to_area(self) -> Area:
return Area(
self.start_lba,
self.end_lba,
self.size_lba
)
def set_area(self, start: int = -1, end: int = -1, size: int = -1, area: Area = None):
val = Area(start, end, size, area).fixup().to_tuple()
self.start_lba, self.end_lba, self.size_lba = val
def delete(self):
self.layout.del_partition(self)
@property
def attributes(self) -> int: pass
@attributes.setter
def attributes(self, attributes: int): pass
@property
def start(self) -> int:
return self.start_lba * self.layout.sector
@start.setter
def start(self, start: int):
		self.start_lba = start // self.layout.sector
@property
def end(self) -> int:
return self.end_lba * self.layout.sector
@end.setter
def end(self, end: int):
		self.end_lba = end // self.layout.sector
@property
def size(self) -> int:
return self.size_lba * self.layout.sector
@size.setter
def size(self, size: int):
		self.size_lba = size // self.layout.sector

@ -0,0 +1,33 @@
class DiskTypes:
@staticmethod
def lookup(t) -> list[tuple[int, str]]:
pass
	@staticmethod
	def lookup_one(t) -> tuple[int, str]:
		l = DiskTypes.lookup(t)
		return l[0] if len(l) > 0 else None
@staticmethod
def lookup_one_id(t) -> int:
r = DiskTypes.lookup_one(t)
return r[0] if r else None
@staticmethod
def lookup_one_name(t) -> str:
r = DiskTypes.lookup_one(t)
return r[1] if r else None
@staticmethod
def lookup_names(t) -> list[str]:
r = DiskTypes.lookup(t)
return [t[1] for t in r]
@staticmethod
def equal(l, r) -> bool:
lf = DiskTypes.lookup_one_id(l)
rf = DiskTypes.lookup_one_id(r)
if lf is None or rf is None: return False
return lf == rf
types: list[tuple[int, str]] = []

9
builder/disk/types.py Normal file
@ -0,0 +1,9 @@
from builder.disk.content import ImageContentBuilder
from builder.disk.layout.build import DiskLayoutBuilder
from builder.disk.filesystem.build import FileSystemBuilder
types: list[tuple[str, type[ImageContentBuilder]]] = [
("disk", DiskLayoutBuilder),
("filesystem", FileSystemBuilder),
]

202
builder/lib/area.py Normal file
@ -0,0 +1,202 @@
from typing import Self
from builder.lib.utils import round_up, round_down, size_to_bytes
from builder.lib.serializable import SerializableDict, SerializableList
class Area(SerializableDict):
start: int = -1
end: int = -1
size: int = -1
def set(self, start: int = -1, end: int = -1, size: int = -1) -> Self:
self.start, self.end, self.size = start, end, size
return self
def to_tuple(self) -> tuple[int, int, int]:
return self.start, self.end, self.size
def to_dict(self) -> dict:
return {
"start": self.start,
"end": self.end,
"size": self.size,
}
def reset(self) -> Self:
self.set(-1, -1, -1)
return self
def from_dict(self, o: dict) -> Self:
self.reset()
if "start" in o: self.start = size_to_bytes(o["start"])
if "offset" in o: self.start = size_to_bytes(o["offset"])
if "end" in o: self.end = size_to_bytes(o["end"])
if "size" in o: self.size = size_to_bytes(o["size"])
if "length" in o: self.size = size_to_bytes(o["length"])
return self
def is_area_in(self, area: Self) -> bool:
self.fixup()
area.fixup()
return (
(self.start <= area.start <= self.end) and
(self.start <= area.end <= self.end) and
(area.size <= self.size)
)
def fixup(self) -> Self:
		if self.start >= 0 and self.end >= 0 and self.start > self.end + 1:
			raise ValueError("start larger than end")
		if self.end >= 0 and self.size >= 0 and self.size > self.end + 1:
			raise ValueError("size larger than end allows")
if self.start >= 0 and self.end >= 0 and self.size >= 0:
if self.size != self.end - self.start + 1:
raise ValueError("bad size")
elif self.start >= 0 and self.end >= 0:
self.size = self.end - self.start + 1
elif self.start >= 0 and self.size >= 0:
self.end = self.start + self.size - 1
elif self.end >= 0 and self.size >= 0:
self.start = self.end - self.size + 1
else:
raise ValueError("missing value")
return self
def __init__(self, start: int = -1, end: int = -1, size: int = -1, area: Self = None):
super().__init__()
if area: start, end, size = area.to_tuple()
self.start, self.end, self.size = start, end, size
def convert(start: int = -1, end: int = -1, size: int = -1, area: Area = None) -> Area:
return Area(start, end, size, area).fixup()
def to_tuple(start: int = -1, end: int = -1, size: int = -1, area: Area = None) -> tuple[int, int, int]:
return convert(start, end, size, area).to_tuple()
class Areas(list[Area], SerializableList):
def is_area_in(self, area: Area) -> bool:
return any(pool.is_area_in(area) for pool in self)
def merge(self) -> Self:
idx = 0
self.sort(key=lambda x: (x.start, x.end))
while len(self) > 0:
curr = self[idx]
if curr.size <= 0:
self.remove(curr)
continue
if idx > 0:
last = self[idx - 1]
if last.end + 1 >= curr.start:
ent = Area(last.start, curr.end)
ent.fixup()
self.remove(last)
self.remove(curr)
self.insert(idx - 1, ent)
idx -= 1
idx += 1
if idx >= len(self): break
return self
def lookup(
self,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
) -> Area | None:
start, end, size = to_tuple(start, end, size, area)
for area in self:
if not (area.start <= start <= area.end): continue
if not (area.start <= end <= area.end): continue
if size > area.size: continue
return area
return None
def align(self, align: int) -> Self:
self.sort(key=lambda x: (x.start, x.end))
for area in self:
start = round_up(area.start, align)
end = round_down(area.end + 1, align) - 1
size = end - start + 1
if start >= end or size < align:
self.remove(area)
else:
area.set(start, end, size)
self.merge()
return self
def add(
self,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None
) -> Area | None:
if area: start, end, size = area.to_tuple()
cnt = (start >= 0) + (end >= 0) + (size >= 0)
if cnt < 2: raise ValueError("missing value")
r = convert(start, end, size)
if r.size <= 0: return None
self.append(r)
return r
def splice(
self,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
) -> bool:
start, end, size = to_tuple(start, end, size, area)
if len(self) <= 0: return False
rs = min(area.start for area in self)
re = max(area.end for area in self)
if start < rs: start = rs
if end > re: end = re
start, end, size = to_tuple(start, end)
target = self.lookup(start, end, size)
if target is None: return False
self.remove(target)
self.add(target.start, start - 1)
self.add(end + 1, target.end)
self.merge()
return True
def find(
self,
start: int = -1,
end: int = -1,
size: int = -1,
area: Area = None,
biggest: bool = True,
) -> Area | None:
if area: start, end, size = area.to_tuple()
cnt = (start >= 0) + (end >= 0) + (size >= 0)
if cnt >= 2:
area = convert(start, end, size)
return area if self.is_area_in(area) else None
use = Areas()
for free in self:
if start >= 0 and not (free.start <= start <= free.end): continue
if end >= 0 and not (free.start <= end <= free.end): continue
if size >= 0 and size > free.size: continue
use.add(area=free)
		if biggest: use.sort(key=lambda x: x.size, reverse=True)
if len(use) <= 0: return None
target = use[0]
if start >= 0: target.start, target.end = start, -1
if end >= 0: target.start, target.end = -1, end
if size >= 0: target.end, target.size = -1, size
return target.fixup()
def to_list(self) -> list:
return self
def from_list(self, o: list) -> Self:
self.clear()
for i in o: self.append(Area().from_dict(i))
return self
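The merge pass above can be sketched standalone on plain `(start, end)` tuples (a simplified model for illustration, not the `Area` class itself): sort the ranges, drop empty ones, and coalesce neighbours that touch or overlap.

```python
def merge_ranges(ranges: list[tuple[int, int]]) -> list[tuple[int, int]]:
	"""Coalesce inclusive [start, end] ranges that touch or overlap."""
	out: list[tuple[int, int]] = []
	for start, end in sorted(ranges):
		if end < start:
			continue  # drop empty/invalid ranges, like the size <= 0 check
		if out and out[-1][1] + 1 >= start:
			# adjacent or overlapping: extend the previous range
			out[-1] = (out[-1][0], max(out[-1][1], end))
		else:
			out.append((start, end))
	return out

print(merge_ranges([(10, 19), (0, 4), (5, 9), (30, 40)]))  # → [(0, 19), (30, 40)]
```

Note the `max()` when extending: without it, a range nested inside its predecessor (e.g. `(5, 10)` after `(0, 100)`) would shrink the merged result.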

642
builder/lib/blkid.py Normal file

@ -0,0 +1,642 @@
from ctypes import *
class BlkidType:
	blkid = None
	@property
	def pptr(self): return pointer(self.ptr)
	def __init__(self, blkid, ptr: c_void_p=None):
		self.blkid = blkid
		self.ptr = ptr if ptr else c_void_p(None)
class BlkidCache(BlkidType): pass
class BlkidProbe(BlkidType): pass
class BlkidDevice(BlkidType): pass
class BlkidDeviceIterate(BlkidType): pass
class BlkidTagIterate(BlkidType): pass
class BlkidTopology(BlkidType): pass
class BlkidPartList(BlkidType): pass
class BlkidPartTable(BlkidType): pass
class BlkidPartition(BlkidType): pass
class Blkid:
obj: CDLL=None
BLKID_DEV_FIND = 0x0000
BLKID_DEV_CREATE = 0x0001
BLKID_DEV_VERIFY = 0x0002
BLKID_DEV_NORMAL = 0x0003
BLKID_SUBLKS_LABEL = (1 << 1)
BLKID_SUBLKS_LABELRAW = (1 << 2)
BLKID_SUBLKS_UUID = (1 << 3)
BLKID_SUBLKS_UUIDRAW = (1 << 4)
BLKID_SUBLKS_TYPE = (1 << 5)
BLKID_SUBLKS_SECTYPE = (1 << 6)
BLKID_SUBLKS_USAGE = (1 << 7)
BLKID_SUBLKS_VERSION = (1 << 8)
BLKID_SUBLKS_MAGIC = (1 << 9)
BLKID_SUBLKS_BADCSUM = (1 << 10)
BLKID_SUBLKS_FSINFO = (1 << 11)
BLKID_SUBLKS_DEFAULT = ((1 << 1) | (1 << 3) | (1 << 5) | (1 << 6))
BLKID_FLTR_NOTIN = 1
BLKID_FLTR_ONLYIN = 2
BLKID_USAGE_FILESYSTEM = (1 << 1)
BLKID_USAGE_RAID = (1 << 2)
BLKID_USAGE_CRYPTO = (1 << 3)
BLKID_USAGE_OTHER = (1 << 4)
BLKID_PARTS_FORCE_GPT = (1 << 1)
BLKID_PARTS_ENTRY_DETAILS = (1 << 2)
BLKID_PARTS_MAGIC = (1 << 3)
BLKID_PROBE_OK = 0
BLKID_PROBE_NONE = 1
BLKID_PROBE_ERROR = -1
BLKID_PROBE_AMBIGUOUS = -2
def __init__(self):
self.obj = CDLL("libblkid.so.1")
def init_debug(self, mask: int) -> None:
self.obj.blkid_init_debug.argtypes = (c_int, )
self.obj.blkid_init_debug(mask)
def put_cache(self, cache: BlkidCache) -> None:
self.obj.blkid_put_cache(cache.ptr)
def get_cache(self, filename: str=None) -> tuple[int, BlkidCache]:
cache = BlkidCache(self)
self.obj.blkid_get_cache.argtypes = (c_void_p, c_char_p, )
self.obj.blkid_get_cache.restype = c_int
		c = cache.pptr
f = filename.encode() if filename else None
ret = self.obj.blkid_get_cache(c, f)
return (ret, cache)
def gc_cache(self, cache: BlkidCache) -> None:
self.obj.blkid_gc_cache.argtypes = (c_void_p, )
self.obj.blkid_gc_cache(cache.ptr)
def dev_devname(self, dev: BlkidDevice) -> str:
self.obj.blkid_dev_devname.argtypes = (c_void_p, )
self.obj.blkid_dev_devname.restype = c_char_p
ret = self.obj.blkid_dev_devname(dev.ptr)
return ret.decode() if ret else None
def dev_iterate_begin(self, cache: BlkidCache) -> BlkidDeviceIterate:
iter = BlkidDeviceIterate(self)
self.obj.blkid_dev_iterate_begin.argtypes = (c_void_p, )
self.obj.blkid_dev_iterate_begin.restype = c_void_p
iter.ptr = self.obj.blkid_dev_iterate_begin(cache.ptr)
return iter
def dev_set_search(self, iter: BlkidDeviceIterate, type: str=None, value: str=None) -> int:
self.obj.blkid_dev_set_search.argtypes = (c_void_p, c_char_p, c_char_p, )
self.obj.blkid_dev_set_search.restype = c_int
		t = type.encode() if type else None
		v = value.encode() if value else None
		return self.obj.blkid_dev_set_search(iter.ptr, t, v)
def dev_next(self, iter: BlkidDeviceIterate) -> tuple[int, BlkidDevice]:
dev = BlkidDevice(self)
self.obj.blkid_dev_next.argtypes = (c_void_p, c_void_p, )
self.obj.blkid_dev_next.restype = c_int
ret = self.obj.blkid_dev_next(iter.ptr, dev.pptr)
return (ret, dev)
def dev_iterate_end(self, iter: BlkidDeviceIterate):
self.obj.blkid_dev_iterate_end.argtypes = (c_void_p, )
self.obj.blkid_dev_iterate_end(iter.ptr)
def devno_to_devname(self, devno: int) -> str:
		self.obj.blkid_devno_to_devname.argtypes = (c_uint64, )
self.obj.blkid_devno_to_devname.restype = c_char_p
ret = self.obj.blkid_devno_to_devname(devno)
return ret.decode() if ret else None
	def devno_to_wholedisk(self, dev: int, buflen: int = 256) -> tuple[int, int, str]:
		diskdevno = c_uint64(0)
		diskname = create_string_buffer(buflen)
		self.obj.blkid_devno_to_wholedisk.argtypes = (c_uint64, c_char_p, c_uint64, POINTER(c_uint64), )
		self.obj.blkid_devno_to_wholedisk.restype = c_int
		ret = self.obj.blkid_devno_to_wholedisk(dev, diskname, buflen, pointer(diskdevno))
		return (ret, diskdevno.value, diskname.value.decode())
def probe_all(self, cache: BlkidCache) -> int:
self.obj.blkid_probe_all.argtypes = (c_void_p, )
self.obj.blkid_probe_all.restype = c_int
return self.obj.blkid_probe_all(cache.ptr)
def probe_all_new(self, cache: BlkidCache) -> int:
self.obj.blkid_probe_all_new.argtypes = (c_void_p, )
self.obj.blkid_probe_all_new.restype = c_int
return self.obj.blkid_probe_all_new(cache.ptr)
def probe_all_removable(self, cache: BlkidCache) -> int:
self.obj.blkid_probe_all_removable.argtypes = (c_void_p, )
self.obj.blkid_probe_all_removable.restype = c_int
return self.obj.blkid_probe_all_removable(cache.ptr)
def get_dev(self, cache: BlkidCache, devname: str, flags: int) -> BlkidDevice:
dev = BlkidDevice(self)
self.obj.blkid_get_dev.argtypes = (c_void_p, c_char_p, c_int, )
self.obj.blkid_get_dev.restype = c_void_p
		dev.ptr = self.obj.blkid_get_dev(cache.ptr, devname.encode() if devname else None, flags)
return dev
def get_dev_size(self, fd: int):
self.obj.blkid_get_dev_size.argtypes = (c_int, )
self.obj.blkid_get_dev_size.restype = c_uint64
return self.obj.blkid_get_dev_size(fd)
def verify(self, cache: BlkidCache, dev: BlkidDevice) -> BlkidDevice:
ret = BlkidDevice(self)
self.obj.blkid_verify.argtypes = (c_void_p, c_void_p, )
self.obj.blkid_verify.restype = c_void_p
ret.ptr = self.obj.blkid_verify(cache.ptr, dev.ptr)
return ret
def get_tag_value(self, iter: BlkidDeviceIterate=None, tagname: str=None, devname: str=None) -> str:
self.obj.blkid_get_tag_value.argtypes = (c_void_p, c_char_p, c_char_p, )
self.obj.blkid_get_tag_value.restype = c_char_p
i = iter.ptr if iter else None
t = tagname.encode() if tagname else None
d = devname.encode() if devname else None
ret = self.obj.blkid_get_tag_value(i, t, d)
return ret.decode() if ret else None
def get_devname(self, iter: BlkidDeviceIterate=None, token: str=None, value: str=None) -> str:
self.obj.blkid_get_devname.argtypes = (c_void_p, c_char_p, c_char_p, )
self.obj.blkid_get_devname.restype = c_char_p
i = iter.ptr if iter else None
t = token.encode() if token else None
v = value.encode() if value else None
ret = self.obj.blkid_get_devname(i, t, v)
return ret.decode() if ret else None
def tag_iterate_begin(self, dev: BlkidDevice) -> BlkidTagIterate:
ret = BlkidTagIterate(self)
self.obj.blkid_tag_iterate_begin.argtypes = (c_void_p, )
self.obj.blkid_tag_iterate_begin.restype = c_void_p
ret.ptr = self.obj.blkid_tag_iterate_begin(dev.ptr)
return ret
	def tag_next(self, iter: BlkidTagIterate) -> tuple[int, str, str]:
		type = c_char_p(None)
		value = c_char_p(None)
		self.obj.blkid_tag_next.argtypes = (c_void_p, c_void_p, c_void_p, )
		self.obj.blkid_tag_next.restype = c_int
		ret = self.obj.blkid_tag_next(iter.ptr, byref(type), byref(value))
		return (ret, type.value.decode() if type.value else None, value.value.decode() if value.value else None)
def tag_iterate_end(self, iter: BlkidTagIterate):
self.obj.blkid_tag_iterate_end.argtypes = (c_void_p, )
self.obj.blkid_tag_iterate_end(iter.ptr)
	def dev_has_tag(self, dev: BlkidDevice, type: str=None, value: str=None) -> int:
		self.obj.blkid_dev_has_tag.argtypes = (c_void_p, c_char_p, c_char_p, )
		self.obj.blkid_dev_has_tag.restype = c_int
		t = type.encode() if type else None
		v = value.encode() if value else None
		return self.obj.blkid_dev_has_tag(dev.ptr, t, v)
	def find_dev_with_tag(self, cache: BlkidCache, type: str=None, value: str=None) -> BlkidDevice:
		self.obj.blkid_find_dev_with_tag.argtypes = (c_void_p, c_char_p, c_char_p, )
		self.obj.blkid_find_dev_with_tag.restype = c_void_p
		dev = BlkidDevice(self)
		t = type.encode() if type else None
		v = value.encode() if value else None
		dev.ptr = self.obj.blkid_find_dev_with_tag(cache.ptr, t, v)
		return dev
	def parse_tag_string(self, token: str) -> tuple[int, str, str]:
		self.obj.blkid_parse_tag_string.argtypes = (c_char_p, c_void_p, c_void_p, )
		self.obj.blkid_parse_tag_string.restype = c_int
		type = c_char_p(None)
		value = c_char_p(None)
		ret = self.obj.blkid_parse_tag_string(token.encode() if token else None, byref(type), byref(value))
		return (ret, type.value.decode() if type.value else None, value.value.decode() if value.value else None)
def parse_version_string(self, ver_string: str) -> int:
self.obj.blkid_parse_version_string.argtypes = (c_char_p, )
self.obj.blkid_parse_version_string.restype = c_int
		return self.obj.blkid_parse_version_string(ver_string.encode() if ver_string else None)
	def get_library_version(self) -> tuple[int, str, str]:
		self.obj.blkid_get_library_version.argtypes = (c_void_p, c_void_p, )
		self.obj.blkid_get_library_version.restype = c_int
		ver = c_char_p(None)
		date = c_char_p(None)
		ret = self.obj.blkid_get_library_version(byref(ver), byref(date))
		return (ret, ver.value.decode() if ver.value else None, date.value.decode() if date.value else None)
	def encode_string(self, string: str, size: int = 256) -> tuple[int, str]:
		self.obj.blkid_encode_string.argtypes = (c_char_p, c_char_p, c_uint64, )
		self.obj.blkid_encode_string.restype = c_int
		buf = create_string_buffer(size)
		ret = self.obj.blkid_encode_string(string.encode(), buf, size)
		return (ret, buf.value.decode())
	def safe_string(self, string: str, size: int = 256) -> tuple[int, str]:
		self.obj.blkid_safe_string.argtypes = (c_char_p, c_char_p, c_uint64, )
		self.obj.blkid_safe_string.restype = c_int
		buf = create_string_buffer(size)
		ret = self.obj.blkid_safe_string(string.encode(), buf, size)
		return (ret, buf.value.decode())
def send_uevent(self, devname: str, action: str) -> int:
self.obj.blkid_send_uevent.argtypes = (c_char_p, c_char_p, )
self.obj.blkid_send_uevent.restype = c_int
		return self.obj.blkid_send_uevent(devname.encode() if devname else None, action.encode() if action else None)
def evaluate_tag(self, token: str, value: str=None, cache: BlkidCache=None) -> str:
self.obj.blkid_evaluate_tag.argtypes = (c_char_p, c_char_p, c_void_p, )
self.obj.blkid_evaluate_tag.restype = c_char_p
t = token.encode() if token else None
v = value.encode() if value else None
c = cache.pptr if cache else None
ret = self.obj.blkid_evaluate_tag(t, v, c)
return ret.decode() if ret else None
def evaluate_spec(self, spec: str, cache: BlkidCache) -> str:
		self.obj.blkid_evaluate_spec.argtypes = (c_char_p, c_void_p, )
		self.obj.blkid_evaluate_spec.restype = c_char_p
s = spec.encode() if spec else None
c = cache.pptr if cache else None
ret = self.obj.blkid_evaluate_spec(s, c)
return ret.decode() if ret else None
def new_probe(self) -> BlkidProbe:
self.obj.blkid_new_probe.argtypes = ()
self.obj.blkid_new_probe.restype = c_void_p
return BlkidProbe(self, self.obj.blkid_new_probe())
def new_probe_from_filename(self, filename: str) -> BlkidProbe:
self.obj.blkid_new_probe_from_filename.argtypes = (c_char_p, )
self.obj.blkid_new_probe_from_filename.restype = c_void_p
		return BlkidProbe(self, self.obj.blkid_new_probe_from_filename(filename.encode() if filename else None))
def free_probe(self, pr: BlkidProbe):
self.obj.blkid_free_probe.argtypes = (c_void_p, )
self.obj.blkid_free_probe.restype = None
self.obj.blkid_free_probe(pr.ptr)
def reset_probe(self, pr: BlkidProbe):
self.obj.blkid_reset_probe.argtypes = (c_void_p, )
self.obj.blkid_reset_probe.restype = None
self.obj.blkid_reset_probe(pr.ptr)
def probe_reset_buffers(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_reset_buffers.argtypes = (c_void_p, )
self.obj.blkid_probe_reset_buffers.restype = c_int
return self.obj.blkid_probe_reset_buffers(pr.ptr)
def probe_hide_range(self, pr: BlkidProbe, off: int, len: int) -> int:
self.obj.blkid_probe_hide_range.argtypes = (c_void_p, c_uint64, c_uint64, )
self.obj.blkid_probe_hide_range.restype = c_int
return self.obj.blkid_probe_hide_range(pr.ptr, off, len)
def probe_set_device(self, pr: BlkidProbe, fd: int, off: int, size: int) -> int:
self.obj.blkid_probe_set_device.argtypes = (c_void_p, c_int, c_uint64, c_uint64, )
self.obj.blkid_probe_set_device.restype = c_int
return self.obj.blkid_probe_set_device(pr.ptr, fd, off, size)
def probe_get_devno(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_devno.argtypes = (c_void_p, )
self.obj.blkid_probe_get_devno.restype = c_uint64
		return self.obj.blkid_probe_get_devno(pr.ptr)
def probe_get_wholedisk_devno(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_wholedisk_devno.argtypes = (c_void_p, )
self.obj.blkid_probe_get_wholedisk_devno.restype = c_uint64
		return self.obj.blkid_probe_get_wholedisk_devno(pr.ptr)
def probe_is_wholedisk(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_is_wholedisk.argtypes = (c_void_p, )
self.obj.blkid_probe_is_wholedisk.restype = c_int
return self.obj.blkid_probe_is_wholedisk(pr.ptr)
def probe_get_size(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_size.argtypes = (c_void_p, )
self.obj.blkid_probe_get_size.restype = c_uint64
return self.obj.blkid_probe_get_size(pr.ptr)
def probe_get_offset(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_offset.argtypes = (c_void_p, )
self.obj.blkid_probe_get_offset.restype = c_uint64
return self.obj.blkid_probe_get_offset(pr.ptr)
def probe_get_sectorsize(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_sectorsize.argtypes = (c_void_p, )
self.obj.blkid_probe_get_sectorsize.restype = c_uint
return self.obj.blkid_probe_get_sectorsize(pr.ptr)
def probe_set_sectorsize(self, pr: BlkidProbe, sz: int) -> int:
self.obj.blkid_probe_set_sectorsize.argtypes = (c_void_p, c_uint, )
self.obj.blkid_probe_set_sectorsize.restype = c_int
return self.obj.blkid_probe_set_sectorsize(pr.ptr, sz)
def probe_get_sectors(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_sectors.argtypes = (c_void_p, )
self.obj.blkid_probe_get_sectors.restype = c_uint64
return self.obj.blkid_probe_get_sectors(pr.ptr)
def probe_get_fd(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_get_fd.argtypes = (c_void_p, )
self.obj.blkid_probe_get_fd.restype = c_int
return self.obj.blkid_probe_get_fd(pr.ptr)
def probe_set_hint(self, pr: BlkidProbe, name: str, value: int) -> int:
self.obj.blkid_probe_set_hint.argtypes = (c_void_p, c_char_p, c_uint64, )
self.obj.blkid_probe_set_hint.restype = c_int
		return self.obj.blkid_probe_set_hint(pr.ptr, name.encode() if name else None, value)
def probe_reset_hints(self, pr: BlkidProbe):
self.obj.blkid_probe_reset_hints.argtypes = (c_void_p, )
self.obj.blkid_probe_reset_hints.restype = None
self.obj.blkid_probe_reset_hints(pr.ptr)
def known_fstype(self, fstype: str) -> int:
self.obj.blkid_known_fstype.argtypes = (c_char_p, )
self.obj.blkid_known_fstype.restype = c_int
		return self.obj.blkid_known_fstype(fstype.encode() if fstype else None)
	def superblocks_get_name(self, idx: int) -> tuple[int, str, int]:
		self.obj.blkid_superblocks_get_name.argtypes = (c_uint64, c_void_p, c_void_p, )
		self.obj.blkid_superblocks_get_name.restype = c_int
		name = c_char_p(None)
		usage = c_int(0)
		ret = self.obj.blkid_superblocks_get_name(idx, byref(name), byref(usage))
		return (ret, name.value.decode() if name.value else None, usage.value)
def probe_enable_superblocks(self, pr: BlkidProbe, enable: bool) -> int:
self.obj.blkid_probe_enable_superblocks.argtypes = (c_void_p, c_int, )
self.obj.blkid_probe_enable_superblocks.restype = c_int
return self.obj.blkid_probe_enable_superblocks(pr.ptr, enable)
def probe_set_superblocks_flags(self, pr: BlkidProbe, flags: int) -> int:
self.obj.blkid_probe_set_superblocks_flags.argtypes = (c_void_p, c_int, )
self.obj.blkid_probe_set_superblocks_flags.restype = c_int
return self.obj.blkid_probe_set_superblocks_flags(pr.ptr, flags)
def probe_reset_superblocks_filter(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_reset_superblocks_filter.argtypes = (c_void_p, )
self.obj.blkid_probe_reset_superblocks_filter.restype = c_int
return self.obj.blkid_probe_reset_superblocks_filter(pr.ptr)
def probe_invert_superblocks_filter(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_invert_superblocks_filter.argtypes = (c_void_p, )
self.obj.blkid_probe_invert_superblocks_filter.restype = c_int
return self.obj.blkid_probe_invert_superblocks_filter(pr.ptr)
	def probe_filter_superblocks_type(self, pr: BlkidProbe, flag: int, names: list[str]) -> int:
		self.obj.blkid_probe_filter_superblocks_type.argtypes = (c_void_p, c_int, c_void_p, )
		self.obj.blkid_probe_filter_superblocks_type.restype = c_int
		arr = (c_char_p * (len(names) + 1))(*[n.encode() for n in names], None)
		return self.obj.blkid_probe_filter_superblocks_type(pr.ptr, flag, arr)
def probe_filter_superblocks_usage(self, pr: BlkidProbe, flag: int, usage: int) -> int:
self.obj.blkid_probe_filter_superblocks_usage.argtypes = (c_void_p, c_int, c_int, )
self.obj.blkid_probe_filter_superblocks_usage.restype = c_int
return self.obj.blkid_probe_filter_superblocks_usage(pr.ptr, flag, usage)
def probe_enable_topology(self, pr: BlkidProbe, enable: bool) -> int:
self.obj.blkid_probe_enable_topology.argtypes = (c_void_p, c_int, )
self.obj.blkid_probe_enable_topology.restype = c_int
return self.obj.blkid_probe_enable_topology(pr.ptr, enable)
def probe_get_topology(self, pr: BlkidProbe) -> BlkidTopology:
self.obj.blkid_probe_get_topology.argtypes = (c_void_p, )
self.obj.blkid_probe_get_topology.restype = c_void_p
return BlkidTopology(self, self.obj.blkid_probe_get_topology(pr.ptr))
def topology_get_alignment_offset(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_alignment_offset.argtypes = (c_void_p, )
self.obj.blkid_topology_get_alignment_offset.restype = c_ulong
return self.obj.blkid_topology_get_alignment_offset(tp.ptr)
def topology_get_minimum_io_size(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_minimum_io_size.argtypes = (c_void_p, )
self.obj.blkid_topology_get_minimum_io_size.restype = c_ulong
return self.obj.blkid_topology_get_minimum_io_size(tp.ptr)
def topology_get_optimal_io_size(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_optimal_io_size.argtypes = (c_void_p, )
self.obj.blkid_topology_get_optimal_io_size.restype = c_ulong
return self.obj.blkid_topology_get_optimal_io_size(tp.ptr)
def topology_get_logical_sector_size(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_logical_sector_size.argtypes = (c_void_p, )
self.obj.blkid_topology_get_logical_sector_size.restype = c_ulong
return self.obj.blkid_topology_get_logical_sector_size(tp.ptr)
def topology_get_physical_sector_size(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_physical_sector_size.argtypes = (c_void_p, )
self.obj.blkid_topology_get_physical_sector_size.restype = c_ulong
return self.obj.blkid_topology_get_physical_sector_size(tp.ptr)
def topology_get_dax(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_dax.argtypes = (c_void_p, )
self.obj.blkid_topology_get_dax.restype = c_ulong
return self.obj.blkid_topology_get_dax(tp.ptr)
def topology_get_diskseq(self, tp: BlkidTopology) -> int:
self.obj.blkid_topology_get_diskseq.argtypes = (c_void_p, )
self.obj.blkid_topology_get_diskseq.restype = c_uint64
return self.obj.blkid_topology_get_diskseq(tp.ptr)
def known_pttype(self, pttype: str) -> int:
self.obj.blkid_known_pttype.argtypes = (c_char_p, )
self.obj.blkid_known_pttype.restype = c_int
		return self.obj.blkid_known_pttype(pttype.encode() if pttype else None)
	def partitions_get_name(self, idx: int) -> tuple[int, str]:
		self.obj.blkid_partitions_get_name.argtypes = (c_uint64, c_void_p, )
		self.obj.blkid_partitions_get_name.restype = c_int
		name = c_char_p(None)
		ret = self.obj.blkid_partitions_get_name(idx, byref(name))
		return (ret, name.value.decode() if name.value else None)
	def probe_enable_partitions(self, pr: BlkidProbe, enable: bool) -> int:
self.obj.blkid_probe_enable_partitions.argtypes = (c_void_p, c_int, )
self.obj.blkid_probe_enable_partitions.restype = c_int
return self.obj.blkid_probe_enable_partitions(pr.ptr, enable)
def probe_reset_partitions_filter(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_reset_partitions_filter.argtypes = (c_void_p, )
self.obj.blkid_probe_reset_partitions_filter.restype = c_int
return self.obj.blkid_probe_reset_partitions_filter(pr.ptr)
def probe_invert_partitions_filter(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_invert_partitions_filter.argtypes = (c_void_p, )
self.obj.blkid_probe_invert_partitions_filter.restype = c_int
return self.obj.blkid_probe_invert_partitions_filter(pr.ptr)
	def probe_filter_partitions_type(self, pr: BlkidProbe, flag: int, names: list[str]) -> int:
		self.obj.blkid_probe_filter_partitions_type.argtypes = (c_void_p, c_int, c_void_p, )
		self.obj.blkid_probe_filter_partitions_type.restype = c_int
		arr = (c_char_p * (len(names) + 1))(*[n.encode() for n in names], None)
		return self.obj.blkid_probe_filter_partitions_type(pr.ptr, flag, arr)
def probe_set_partitions_flags(self, pr: BlkidProbe, flags: int) -> int:
self.obj.blkid_probe_set_partitions_flags.argtypes = (c_void_p, c_int, )
self.obj.blkid_probe_set_partitions_flags.restype = c_int
return self.obj.blkid_probe_set_partitions_flags(pr.ptr, flags)
def probe_get_partitions(self, pr: BlkidProbe) -> BlkidPartList:
self.obj.blkid_probe_get_partitions.argtypes = (c_void_p, )
self.obj.blkid_probe_get_partitions.restype = c_void_p
return BlkidPartList(self, self.obj.blkid_probe_get_partitions(pr.ptr))
def partlist_numof_partitions(self, ls: BlkidPartList) -> int:
self.obj.blkid_partlist_numof_partitions.argtypes = (c_void_p, )
self.obj.blkid_partlist_numof_partitions.restype = c_int
return self.obj.blkid_partlist_numof_partitions(ls.ptr)
def partlist_get_table(self, ls: BlkidPartList) -> BlkidPartTable:
self.obj.blkid_partlist_get_table.argtypes = (c_void_p, )
self.obj.blkid_partlist_get_table.restype = c_void_p
return BlkidPartTable(self, self.obj.blkid_partlist_get_table(ls.ptr))
def partlist_get_partition(self, ls: BlkidPartList, n: int) -> BlkidPartition:
self.obj.blkid_partlist_get_partition.argtypes = (c_void_p, c_int, )
self.obj.blkid_partlist_get_partition.restype = c_void_p
return BlkidPartition(self, self.obj.blkid_partlist_get_partition(ls.ptr, n))
def partlist_get_partition_by_partno(self, ls: BlkidPartList, n: int) -> BlkidPartition:
self.obj.blkid_partlist_get_partition_by_partno.argtypes = (c_void_p, c_int, )
self.obj.blkid_partlist_get_partition_by_partno.restype = c_void_p
return BlkidPartition(self, self.obj.blkid_partlist_get_partition_by_partno(ls.ptr, n))
def partlist_devno_to_partition(self, ls: BlkidPartList, devno: int) -> BlkidPartition:
self.obj.blkid_partlist_devno_to_partition.argtypes = (c_void_p, c_int, )
self.obj.blkid_partlist_devno_to_partition.restype = c_void_p
return BlkidPartition(self, self.obj.blkid_partlist_devno_to_partition(ls.ptr, devno))
def partition_get_table(self, par: BlkidPartition) -> BlkidPartTable:
self.obj.blkid_partition_get_table.argtypes = (c_void_p, )
self.obj.blkid_partition_get_table.restype = c_void_p
return BlkidPartTable(self, self.obj.blkid_partition_get_table(par.ptr))
	def partition_get_name(self, par: BlkidPartition) -> str:
		self.obj.blkid_partition_get_name.argtypes = (c_void_p, )
		self.obj.blkid_partition_get_name.restype = c_char_p
		ret = self.obj.blkid_partition_get_name(par.ptr)
		return ret.decode() if ret else None
	def partition_get_uuid(self, par: BlkidPartition) -> str:
		self.obj.blkid_partition_get_uuid.argtypes = (c_void_p, )
		self.obj.blkid_partition_get_uuid.restype = c_char_p
		ret = self.obj.blkid_partition_get_uuid(par.ptr)
		return ret.decode() if ret else None
def partition_get_partno(self, par: BlkidPartition) -> int:
self.obj.blkid_partition_get_partno.argtypes = (c_void_p, )
self.obj.blkid_partition_get_partno.restype = c_int
return self.obj.blkid_partition_get_partno(par.ptr)
	def partition_get_start(self, par: BlkidPartition) -> int:
		self.obj.blkid_partition_get_start.argtypes = (c_void_p, )
		self.obj.blkid_partition_get_start.restype = c_int64
		return self.obj.blkid_partition_get_start(par.ptr)
	def partition_get_size(self, par: BlkidPartition) -> int:
		self.obj.blkid_partition_get_size.argtypes = (c_void_p, )
		self.obj.blkid_partition_get_size.restype = c_int64
		return self.obj.blkid_partition_get_size(par.ptr)
def partition_get_type(self, par: BlkidPartition) -> int:
self.obj.blkid_partition_get_type.argtypes = (c_void_p, )
self.obj.blkid_partition_get_type.restype = c_int
return self.obj.blkid_partition_get_type(par.ptr)
	def partition_get_type_string(self, par: BlkidPartition) -> str:
		self.obj.blkid_partition_get_type_string.argtypes = (c_void_p, )
		self.obj.blkid_partition_get_type_string.restype = c_char_p
		ret = self.obj.blkid_partition_get_type_string(par.ptr)
		return ret.decode() if ret else None
def partition_get_flags(self, par: BlkidPartition) -> int:
self.obj.blkid_partition_get_flags.argtypes = (c_void_p, )
self.obj.blkid_partition_get_flags.restype = c_int
return self.obj.blkid_partition_get_flags(par.ptr)
def partition_is_logical(self, par: BlkidPartition) -> bool:
self.obj.blkid_partition_is_logical.argtypes = (c_void_p, )
self.obj.blkid_partition_is_logical.restype = c_int
return bool(self.obj.blkid_partition_is_logical(par.ptr))
def partition_is_extended(self, par: BlkidPartition) -> bool:
self.obj.blkid_partition_is_extended.argtypes = (c_void_p, )
self.obj.blkid_partition_is_extended.restype = c_int
return bool(self.obj.blkid_partition_is_extended(par.ptr))
def partition_is_primary(self, par: BlkidPartition) -> bool:
self.obj.blkid_partition_is_primary.argtypes = (c_void_p, )
self.obj.blkid_partition_is_primary.restype = c_int
return bool(self.obj.blkid_partition_is_primary(par.ptr))
	def parttable_get_type(self, tab: BlkidPartTable) -> str:
		self.obj.blkid_parttable_get_type.argtypes = (c_void_p, )
		self.obj.blkid_parttable_get_type.restype = c_char_p
		ret = self.obj.blkid_parttable_get_type(tab.ptr)
		return ret.decode() if ret else None
	def parttable_get_id(self, tab: BlkidPartTable) -> str:
		self.obj.blkid_parttable_get_id.argtypes = (c_void_p, )
		self.obj.blkid_parttable_get_id.restype = c_char_p
		ret = self.obj.blkid_parttable_get_id(tab.ptr)
		return ret.decode() if ret else None
def parttable_get_offset(self, tab: BlkidPartTable) -> int:
self.obj.blkid_parttable_get_offset.argtypes = (c_void_p, )
		self.obj.blkid_parttable_get_offset.restype = c_int64
return self.obj.blkid_parttable_get_offset(tab.ptr)
def parttable_get_parent(self, tab: BlkidPartTable) -> BlkidPartition:
self.obj.blkid_parttable_get_parent.argtypes = (c_void_p, )
self.obj.blkid_parttable_get_parent.restype = c_void_p
return BlkidPartition(self, self.obj.blkid_parttable_get_parent(tab.ptr))
def do_probe(self, pr: BlkidProbe) -> int:
self.obj.blkid_do_probe.argtypes = (c_void_p, )
self.obj.blkid_do_probe.restype = c_int
return self.obj.blkid_do_probe(pr.ptr)
def do_safeprobe(self, pr: BlkidProbe) -> int:
self.obj.blkid_do_safeprobe.argtypes = (c_void_p, )
self.obj.blkid_do_safeprobe.restype = c_int
return self.obj.blkid_do_safeprobe(pr.ptr)
def do_fullprobe(self, pr: BlkidProbe) -> int:
self.obj.blkid_do_fullprobe.argtypes = (c_void_p, )
self.obj.blkid_do_fullprobe.restype = c_int
return self.obj.blkid_do_fullprobe(pr.ptr)
def probe_numof_values(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_numof_values.argtypes = (c_void_p, )
self.obj.blkid_probe_numof_values.restype = c_int
return self.obj.blkid_probe_numof_values(pr.ptr)
	def probe_get_value(self, pr: BlkidProbe, num: int) -> tuple[int, str, str, int]:
		self.obj.blkid_probe_get_value.argtypes = (c_void_p, c_int, c_void_p, c_void_p, c_void_p, )
		self.obj.blkid_probe_get_value.restype = c_int
		name = c_char_p(None)
		data = c_char_p(None)
		length = c_uint64(0)
		ret = self.obj.blkid_probe_get_value(pr.ptr, num, byref(name), byref(data), byref(length))
		return (ret, name.value.decode() if name.value else None, data.value.decode() if data.value else None, length.value)
	def probe_lookup_value(self, pr: BlkidProbe, name: str) -> tuple[int, str]:
		self.obj.blkid_probe_lookup_value.argtypes = (c_void_p, c_char_p, c_void_p, c_void_p, )
		self.obj.blkid_probe_lookup_value.restype = c_int
		data = c_char_p(None)
		length = c_uint64(0)
		ret = self.obj.blkid_probe_lookup_value(pr.ptr, name.encode() if name else None, byref(data), byref(length))
		return (ret, data.value.decode() if data.value else None)
def probe_has_value(self, pr: BlkidProbe, name: str) -> int:
self.obj.blkid_probe_has_value.argtypes = (c_void_p, c_char_p, )
self.obj.blkid_probe_has_value.restype = c_int
		return self.obj.blkid_probe_has_value(pr.ptr, name.encode() if name else None)
def do_wipe(self, pr: BlkidProbe, dryrun: bool=False) -> int:
self.obj.blkid_do_wipe.argtypes = (c_void_p, c_int, )
self.obj.blkid_do_wipe.restype = c_int
return self.obj.blkid_do_wipe(pr.ptr, dryrun)
def wipe_all(self, pr: BlkidProbe) -> int:
self.obj.blkid_wipe_all.argtypes = (c_void_p, )
self.obj.blkid_wipe_all.restype = c_int
return self.obj.blkid_wipe_all(pr.ptr)
def probe_step_back(self, pr: BlkidProbe) -> int:
self.obj.blkid_probe_step_back.argtypes = (c_void_p, )
self.obj.blkid_probe_step_back.restype = c_int
return self.obj.blkid_probe_step_back(pr.ptr)
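The wrapper above relies on one ctypes pattern throughout: declare `argtypes`/`restype` before each call so arguments and return values are marshalled correctly. A minimal self-contained illustration of why this matters, using libc `strlen` rather than libblkid so it runs without a block device:

```python
import ctypes
import ctypes.util

# Load the C library (assumes a standard libc is present on the system).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Without a declared restype, ctypes assumes a C int, which truncates
# 64-bit values such as pointers; declaring the prototype avoids that
# whole class of bug, which is why the Blkid wrapper does it per call.
libc.strlen.argtypes = (ctypes.c_char_p,)
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"libblkid"))  # → 8
```

The same idea applies to the `c_char_p` arguments: Python `str` objects must be encoded to `bytes` before crossing the boundary, which is why the wrapper methods call `.encode()` on their string parameters.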

64
builder/lib/cgroup.py Normal file

@ -0,0 +1,64 @@
import os
import time
import signal
from logging import getLogger
log = getLogger(__name__)
class CGroup:
fs: str = "/sys/fs/cgroup"
name: str
@property
def path(self) -> str:
return os.path.join(self.fs, self.name)
@property
def valid(self) -> bool:
return os.path.exists(self.path)
def create(self):
if self.valid: return
os.mkdir(self.path)
def destroy(self):
if not self.valid: return
os.rmdir(self.path)
def add_pid(self, pid: int):
if not self.valid: return
procs = os.path.join(self.path, "cgroup.procs")
with open(procs, "w") as f:
f.write(f"{pid}\n")
def list_pid(self) -> list[int]:
ret: list[int] = []
if not self.valid: return ret
procs = os.path.join(self.path, "cgroup.procs")
with open(procs, "r") as f:
for line in f:
ret.append(int(line))
return ret
	def kill_all(self, sig: int = signal.SIGTERM, timeout: int = 10, kill: int = 8):
		if not self.valid: return
		pids = self.list_pid()
		remain = 0
		while True:
			for pid in pids:
				log.debug(f"killing {pid}")
				try: os.kill(pid, sig)
				except OSError: pass
				try: os.waitpid(-1, os.WNOHANG)
				except OSError: pass
			pids = self.list_pid()
			if len(pids) <= 0: break
			if 0 < kill <= remain:
				sig = signal.SIGKILL
			if remain >= timeout:
				raise TimeoutError("killing pids timed out")
			remain += 1
			time.sleep(1)
def __init__(self, name: str, fs: str = None):
if fs: self.fs = fs
self.name = name
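The escalation logic in kill_all (SIGTERM first, SIGKILL after `kill` rounds, give up after `timeout` rounds) can be sketched in isolation with injectable callbacks, which also makes the timing testable without real processes. The function and callback names here are illustrative, not part of the CGroup API:

```python
import signal

def kill_until_gone(list_pids, kill_fn, timeout: int = 10, escalate: int = 8) -> int:
	"""Send SIGTERM each round; switch to SIGKILL after `escalate` rounds,
	give up after `timeout` rounds. The per-round sleep is elided here."""
	sig = signal.SIGTERM
	for elapsed in range(timeout + 1):
		pids = list_pids()
		if not pids:
			return elapsed  # everything exited
		if elapsed >= escalate:
			sig = signal.SIGKILL  # escalate for stubborn processes
		for pid in pids:
			kill_fn(pid, sig)
	raise TimeoutError("killing pids timed out")
```

Simulating a process that ignores SIGTERM shows the escalation: it survives eight SIGTERM rounds, dies on the first SIGKILL, and the loop observes the empty list on the next round.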

104
builder/lib/config.py Normal file

@ -0,0 +1,104 @@
import os
import yaml
from logging import getLogger
from builder.lib import json
from builder.lib.cpu import cpu_arch_compatible
from builder.lib.context import ArchBuilderContext
log = getLogger(__name__)
class ArchBuilderConfigError(Exception):
pass
def _dict_merge(dst: dict, src: dict):
for key in src.keys():
st = type(src[key])
if key in dst and st is type(dst[key]):
if st == list:
dst[key].extend(src[key])
continue
if st == dict:
_dict_merge(dst[key], src[key])
continue
dst[key] = src[key]
def load_config_file(ctx: ArchBuilderContext, path: str):
"""
Load one config (yaml/json) to context
"""
log.debug(f"try to open config {path}")
	try:
		with open(path, "r") as f:
			if path.endswith((".yml", ".yaml")):
				log.debug(f"load {path} as yaml")
				loaded = yaml.safe_load(f)
			elif path.endswith((".jsn", ".json")):
				log.debug(f"load {path} as json")
				loaded = json.load(f)
			else:
				raise ArchBuilderConfigError(f"unknown config format {path}")
		log.info(f"loaded config {path}")
	except BaseException:
		log.error(f"failed to load config {path}")
		raise
def _proc_include(inc: str | list[str]):
pt = type(inc)
if pt is str: inc = [inc]
elif pt is list: pass
		else: raise ArchBuilderConfigError("bad type for include")
load_configs(ctx, inc)
if loaded is None: return
if "+also" in loaded:
_proc_include(loaded["+also"])
loaded.pop("+also")
if ctx.config is None:
log.debug(f"use {path} as current config")
ctx.config = loaded
else:
log.debug(f"merge {path} into current config")
_dict_merge(ctx.config, loaded)
if "+then" in loaded:
_proc_include(loaded["+then"])
loaded.pop("+then")
def populate_config(ctx: ArchBuilderContext):
ctx.finish_config()
ctx.resolve_subscript()
if "target" not in ctx.config:
raise ArchBuilderConfigError("no target set")
if "arch" not in ctx.config:
raise ArchBuilderConfigError("no cpu arch set")
ctx.target = ctx.config["target"]
ctx.tgt_arch = ctx.config["arch"]
if ctx.tgt_arch == "any" or ctx.cur_arch == "any":
raise ArchBuilderConfigError("bad cpu arch value")
if not cpu_arch_compatible(ctx.tgt_arch, ctx.cur_arch):
log.warning(
f"current cpu arch {ctx.cur_arch} is not compatible to {ctx.tgt_arch}, "
"you may need qemu-user-static-binfmt to run incompatible executables",
)
jstr = json.dumps(ctx.config, indent=2)
log.debug(f"populated config:\n {jstr}")
def load_configs(ctx: ArchBuilderContext, configs: list[str]):
"""
Load multiple configs into the context
"""
loaded = 0
for config in configs:
success = False
for suffix in ["yml", "yaml", "jsn", "json"]:
fn = f"{config}.{suffix}"
path = os.path.join(ctx.dir, "configs", fn)
if os.path.exists(path):
load_config_file(ctx, path)
loaded += 1
success = True
break
if not success:
raise FileNotFoundError(f"config {config} not found")
if loaded > 0:
if ctx.config is None:
raise ArchBuilderConfigError("no config loaded")
log.debug(f"loaded {loaded} configs")
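The merge semantics implemented by `_dict_merge` above (matching lists extend, nested dicts merge recursively, everything else overwrites) can be sketched standalone; the config keys below are hypothetical examples, not keys the builder requires:

```python
def dict_merge(dst: dict, src: dict):
    # Same semantics as _dict_merge above: matching lists extend,
    # matching dicts merge recursively, everything else overwrites.
    for key, val in src.items():
        if key in dst and type(dst[key]) is type(val):
            if isinstance(val, list):
                dst[key].extend(val)
                continue
            if isinstance(val, dict):
                dict_merge(dst[key], val)
                continue
        dst[key] = val

base = {"packages": ["base"], "kernel": {"cmdline": "quiet"}}
dict_merge(base, {"packages": ["linux"], "kernel": {"dtb": "board.dtb"}})
assert base["packages"] == ["base", "linux"]
assert base["kernel"] == {"cmdline": "quiet", "dtb": "board.dtb"}
```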

156 builder/lib/context.py Normal file
@@ -0,0 +1,156 @@
import os
from copy import deepcopy
from subprocess import Popen, PIPE
from logging import getLogger
from builder.lib.cpu import cpu_arch_get
from builder.lib.utils import parse_cmd_args
from builder.lib.subscript import dict_get
from builder.lib.loop import loop_detach
from builder.lib.mount import MountTab
from builder.lib.cgroup import CGroup
from builder.lib.subscript import SubScript
from builder.lib.shadow import PasswdFile, GroupFile
log = getLogger(__name__)
class ArchBuilderContext:
"""
Config from configs/{CONFIG}.yaml
"""
config: dict = {}
config_orig: dict = {}
"""
Target name
"""
target: str = None
tgt_arch: str = None
"""
CPU architecture
"""
cur_arch: str = cpu_arch_get()
"""
RootFS ready for chroot
"""
chroot: bool = False
"""
Repack rootfs only
"""
repack: bool = False
"""
Top tree folder
"""
dir: str = None
"""
Workspace folder
"""
work: str = None
"""
Current mounted list
"""
mounted: MountTab = MountTab()
"""
fstab for rootfs
"""
fstab: MountTab = MountTab()
"""
Enable GPG check for pacman packages and databases
"""
gpgcheck: bool = True
"""
Control group for chroot
"""
cgroup: CGroup = None
"""
File system map for host
"""
fsmap: dict = {}
"""
Loopback block for build
"""
loops: list[str] = []
"""
User config for rootfs
"""
passwd: PasswdFile = PasswdFile()
group: GroupFile = GroupFile()
def get(self, key: str, default=None):
try: return dict_get(key, self.config)
except (KeyError, IndexError, ValueError): return default
def get_rootfs(self): return os.path.join(self.work, "rootfs")
def get_output(self): return os.path.join(self.work, "output")
def get_mount(self): return os.path.join(self.work, "mount")
def __init__(self):
self.cgroup = CGroup("arch-image-builder")
self.cgroup.create()
def __del__(self):
self.cleanup()
def cleanup(self):
from builder.build.mount import undo_mounts
self.cgroup.kill_all()
self.cgroup.destroy()
undo_mounts(self)
for loop in self.loops:
log.debug(f"detaching loop {loop}")
loop_detach(loop)
def run_external(
self,
cmd: str | list[str],
/,
cwd: str = None,
env: dict = None,
stdin: str | bytes = None
) -> int:
"""
Run external command
run_external("mke2fs -t ext4 ext4.img")
"""
args = parse_cmd_args(cmd)
argv = " ".join(args)
log.debug(f"running external command {argv}")
fstdin = None if stdin is None else PIPE
proc = Popen(args, cwd=cwd, env=env, stdin=fstdin)
self.cgroup.add_pid(proc.pid)
if stdin:
if type(stdin) is str: stdin = stdin.encode()
proc.stdin.write(stdin)
proc.stdin.close()
ret = proc.wait()
log.debug(f"command exit with {ret}")
return ret
def reload_passwd(self):
root = self.get_rootfs()
pf = os.path.join(root, "etc/passwd")
gf = os.path.join(root, "etc/group")
self.passwd.unload()
self.group.unload()
if os.path.exists(pf): self.passwd.load_file(pf)
if os.path.exists(gf): self.group.load_file(gf)
def finish_config(self):
self.config_orig = deepcopy(self.config)
def resolve_subscript(self):
ss = SubScript()
self.config = deepcopy(self.config_orig)
ss.parse(self.config)
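`run_external` above accepts either a command string or an argv list; strings are tokenized shell-style by `parse_cmd_args`, which wraps `shlex.split`. A quick sketch of that tokenization:

```python
import shlex

# String commands are split the way a POSIX shell would, so quoting works:
assert shlex.split("mke2fs -t ext4 ext4.img") == ["mke2fs", "-t", "ext4", "ext4.img"]
assert shlex.split('bash -c "echo hi"') == ["bash", "-c", "echo hi"]
```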

74 builder/lib/cpu.py Normal file
@@ -0,0 +1,74 @@
import os
from logging import getLogger
log = getLogger(__name__)
def cpu_arch_name_map(name: str) -> str:
"""
Map cpu arch name to archlinux names
cpu_arch_name_map("amd64") = "x86_64"
cpu_arch_name_map("x86_64") = "x86_64"
cpu_arch_name_map("ARM64") = "aarch64"
"""
match name.lower():
case "x64" | "amd64" | "intel64": return "x86_64"
case "i386" | "i486" | "i586" | "x86" | "ia32": return "i686"
case "arm64" | "armv8a" | "armv8" | "arm-v8a" | "arm-v8" | "aa64": return "aarch64"
case "arm32" | "aarch32" | "aa32" | "armv7" | "armv7l" | "arm-v7" | "arm-v7l" | "arm-v7h": return "armv7h"
case _: return name.lower()
def cpu_arch_get_raw() -> str:
"""
Get current cpu arch as reported by uname
cpu_arch_get_raw() = "x86_64"
cpu_arch_get_raw() = "aarch64"
"""
return os.uname().machine
def cpu_arch_get() -> str:
"""
Get current cpu arch and map to archlinux names
cpu_arch_get() = "x86_64"
cpu_arch_get() = "aarch64"
"""
return cpu_arch_name_map(cpu_arch_get_raw())
def cpu_arch_compatible_one(
supported: str,
current: str = cpu_arch_get_raw()
) -> bool:
"""
Is current cpu compatible with supported
cpu_arch_compatible_one("any", "x86_64") = True
cpu_arch_compatible_one("any", "aarch64") = True
cpu_arch_compatible_one("aarch64", "x86_64") = False
cpu_arch_compatible_one("x86_64", "x86_64") = True
"""
cur = cpu_arch_name_map(current.strip())
name = cpu_arch_name_map(supported.strip())
if len(name) == 0: return False
return name == cur or name == "any"
def cpu_arch_compatible(
supported: str | list[str],
current: str = cpu_arch_get_raw()
) -> bool:
"""
Is current cpu compatible with supported list
cpu_arch_compatible("any", "x86_64") = True
cpu_arch_compatible("any", "aarch64") = True
cpu_arch_compatible("aarch64", "x86_64") = False
cpu_arch_compatible("x86_64,aarch64", "x86_64") = True
"""
if type(supported) is str: arch = supported.split(",")
elif type(supported) is list: arch = supported
else: raise TypeError("unknown type for supported")
for cpu in arch:
if cpu_arch_compatible_one(cpu, current):
return True
return False
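A condensed sketch of the normalization `cpu_arch_name_map` performs (a subset of the cases above, reduced to a lookup table for illustration):

```python
def arch_map(name: str) -> str:
    # Reduced mapping table for illustration; the real function above
    # handles more aliases (i386/i486/..., the armv7 variants, etc).
    table = {"x64": "x86_64", "amd64": "x86_64", "intel64": "x86_64",
             "arm64": "aarch64", "aa64": "aarch64", "i386": "i686"}
    return table.get(name.lower(), name.lower())

assert arch_map("AMD64") == "x86_64"
assert arch_map("arm64") == "aarch64"
assert arch_map("riscv64") == "riscv64"  # unknown names pass through lowercased
```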

78 builder/lib/json.py Normal file
@@ -0,0 +1,78 @@
import json
from uuid import UUID
from builder.lib import serializable
class SerializableEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, UUID):
return str(o)
if isinstance(o, serializable.SerializableDict):
return o.to_dict()
if isinstance(o, serializable.SerializableList):
return o.to_list()
if isinstance(o, serializable.Serializable):
return o.serialize()
return super().default(o)
def dump(
obj, fp, *,
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
cls=None,
indent=None,
separators=None,
default=None,
sort_keys=False,
**kw
):
if cls is None: cls = SerializableEncoder
return json.dump(
obj, fp,
skipkeys=skipkeys,
ensure_ascii=ensure_ascii,
check_circular=check_circular,
allow_nan=allow_nan,
cls=cls,
indent=indent,
separators=separators,
default=default,
sort_keys=sort_keys,
**kw
)
def dumps(
obj, *,
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
cls=None,
indent=None,
separators=None,
default=None,
sort_keys=False,
**kw
):
if cls is None: cls = SerializableEncoder
return json.dumps(
obj,
skipkeys=skipkeys,
ensure_ascii=ensure_ascii,
check_circular=check_circular,
allow_nan=allow_nan,
cls=cls,
indent=indent,
separators=separators,
default=default,
sort_keys=sort_keys,
**kw
)
load = json.load
loads = json.loads
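The `SerializableEncoder` pattern above hooks unknown types through `JSONEncoder.default()`; a minimal standalone version reduced to the UUID case:

```python
import json
from uuid import UUID

class UUIDEncoder(json.JSONEncoder):
    # Same idea as SerializableEncoder above: convert unknown types in
    # default(), defer everything else to the base encoder.
    def default(self, o):
        if isinstance(o, UUID):
            return str(o)
        return super().default(o)

s = json.dumps({"id": UUID("12345678-1234-5678-1234-567812345678")}, cls=UUIDEncoder)
assert s == '{"id": "12345678-1234-5678-1234-567812345678"}'
```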

229 builder/lib/loop.py Normal file
@@ -0,0 +1,229 @@
import io
import os
import stat
import fcntl
import ctypes
from builder.lib import utils
LO_NAME_SIZE = 64
LO_KEY_SIZE = 32
LO_FLAGS_READ_ONLY = 1
LO_FLAGS_AUTOCLEAR = 4
LO_FLAGS_PARTSCAN = 8
LO_FLAGS_DIRECT_IO = 16
LO_CRYPT_NONE = 0
LO_CRYPT_XOR = 1
LO_CRYPT_DES = 2
LO_CRYPT_FISH2 = 3
LO_CRYPT_BLOW = 4
LO_CRYPT_CAST128 = 5
LO_CRYPT_IDEA = 6
LO_CRYPT_DUMMY = 9
LO_CRYPT_SKIPJACK = 10
LO_CRYPT_CRYPTOAPI = 18
MAX_LO_CRYPT = 20
LOOP_SET_FD = 0x4C00
LOOP_CLR_FD = 0x4C01
LOOP_SET_STATUS = 0x4C02
LOOP_GET_STATUS = 0x4C03
LOOP_SET_STATUS64 = 0x4C04
LOOP_GET_STATUS64 = 0x4C05
LOOP_CHANGE_FD = 0x4C06
LOOP_SET_CAPACITY = 0x4C07
LOOP_SET_DIRECT_IO = 0x4C08
LOOP_SET_BLOCK_SIZE = 0x4C09
LOOP_CONFIGURE = 0x4C0A
LOOP_CTL_ADD = 0x4C80
LOOP_CTL_REMOVE = 0x4C81
LOOP_CTL_GET_FREE = 0x4C82
LOOP_SET_STATUS_SETTABLE_FLAGS = LO_FLAGS_AUTOCLEAR | LO_FLAGS_PARTSCAN
LOOP_SET_STATUS_CLEARABLE_FLAGS = LO_FLAGS_AUTOCLEAR
LOOP_CONFIGURE_SETTABLE_FLAGS = LO_FLAGS_READ_ONLY | LO_FLAGS_AUTOCLEAR | LO_FLAGS_PARTSCAN | LO_FLAGS_DIRECT_IO
class LoopInfo64(ctypes.Structure):
_fields_ = [
("lo_device", ctypes.c_uint64),
("lo_inode", ctypes.c_uint64),
("lo_rdevice", ctypes.c_uint64),
("lo_offset", ctypes.c_uint64),
("lo_sizelimit", ctypes.c_uint64),
("lo_number", ctypes.c_uint32),
("lo_encrypt_type", ctypes.c_uint32),
("lo_encrypt_key_size", ctypes.c_uint32),
("lo_flags", ctypes.c_uint32),
("lo_file_name", ctypes.c_char * LO_NAME_SIZE),
("lo_crypt_name", ctypes.c_char * LO_NAME_SIZE),
("lo_encrypt_key", ctypes.c_byte * LO_KEY_SIZE),
("lo_init", ctypes.c_uint64 * 2),
]
class LoopConfig(ctypes.Structure):
_fields_ = [
("fd", ctypes.c_uint32),
("block_size", ctypes.c_uint32),
("info", LoopInfo64),
("__reserved", ctypes.c_uint64 * 8),
]
def loop_get_free_no() -> int:
ctrl = os.open("/dev/loop-control", os.O_RDWR)
try:
no = fcntl.ioctl(ctrl, LOOP_CTL_GET_FREE)
if no < 0: raise OSError("LOOP_CTL_GET_FREE failed")
finally: os.close(ctrl)
return no
def loop_get_free() -> str:
no = loop_get_free_no()
return f"/dev/loop{no}"
def loop_create_dev(no: int, dev: str = None) -> str:
if dev is None:
dev = f"/dev/loop{no}"
if not os.path.exists(dev):
if no < 0: raise ValueError("no loop number set")
a_mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IFBLK
a_dev = os.makedev(7, no)
os.mknod(dev, a_mode, a_dev)
return dev
def loop_detach(dev: str):
loop = os.open(dev, os.O_RDWR)
try:
ret = fcntl.ioctl(loop, LOOP_CLR_FD)
if ret != 0: raise OSError(f"detach loop device {dev} failed")
finally: os.close(loop)
def loop_setup(
path: str = None,
fio: io.FileIO = None,
fd: int = -1,
dev: str = None,
no: int = -1,
offset: int = 0,
size: int = 0,
block_size: int = 512,
read_only: bool = False,
part_scan: bool = False,
auto_clear: bool = False,
direct_io: bool = False,
) -> str:
if path is None and fio is None and fd < 0:
raise ValueError("no source file set")
if no < 0:
if dev is None:
dev = loop_get_free()
else:
fn = os.path.basename(dev)
if fn.startswith("loop"): no = int(fn[4:])
loop_create_dev(no=no, dev=dev)
opened, loop = -1, -1
if fio:
if fd < 0: fd = fio.fileno()
if path is None: path = fio.name
elif fd >= 0:
if path is None: path = utils.fd_get_path(fd)
if path is None: raise OSError("bad fd for loop")
elif path:
path = os.path.realpath(path)
opened = os.open(path, os.O_RDWR)
if opened < 0: raise OSError(f"open {path} failed")
fd = opened
else: raise ValueError("no source file set")
flags = 0
if part_scan: flags |= LO_FLAGS_PARTSCAN
if direct_io: flags |= LO_FLAGS_DIRECT_IO
if read_only: flags |= LO_FLAGS_READ_ONLY
if auto_clear: flags |= LO_FLAGS_AUTOCLEAR
try:
file_name = path[0:63].encode()
li = LoopInfo64(
lo_flags=flags,
lo_offset=offset,
lo_sizelimit=size,
lo_file_name=file_name,
)
lc = LoopConfig(fd=fd, block_size=block_size, info=li)
loop = os.open(dev, os.O_RDWR)
if loop < 0: raise OSError(f"open loop device {dev} failed")
ret = fcntl.ioctl(loop, LOOP_CONFIGURE, lc)
if ret != 0: raise OSError(f"configure loop device {dev} with {path} failed")
finally:
if loop >= 0: os.close(loop)
if opened >= 0: os.close(opened)
return dev
def loop_get_sysfs(dev: str) -> str:
st = os.stat(dev)
if not stat.S_ISBLK(st.st_mode):
raise ValueError(f"device {dev} is not block")
major = os.major(st.st_rdev)
minor = os.minor(st.st_rdev)
if major != 7:
raise ValueError(f"device {dev} is not loop")
sysfs = f"/sys/dev/block/{major}:{minor}"
if not os.path.exists(sysfs):
raise RuntimeError("get sysfs failed")
return sysfs
def loop_get_backing(dev: str) -> str:
sysfs = loop_get_sysfs(dev)
path = os.path.join(sysfs, "loop", "backing_file")
with open(path, "r") as f:
backing = f.read()
return os.path.realpath(backing.strip())
def loop_get_offset(dev: str) -> int:
sysfs = loop_get_sysfs(dev)
path = os.path.join(sysfs, "loop", "offset")
with open(path, "r") as f:
backing = f.read()
return int(backing.strip())
class LoopDevice:
device: str
def __init__(
self,
path: str = None,
fio: io.FileIO = None,
fd: int = -1,
dev: str = None,
no: int = -1,
offset: int = 0,
size: int = 0,
block_size: int = 512,
read_only: bool = False,
part_scan: bool = False,
auto_clear: bool = False,
direct_io: bool = False,
):
self.device = loop_setup(
path=path,
fio=fio,
fd=fd,
dev=dev,
no=no,
offset=offset,
size=size,
block_size=block_size,
read_only=read_only,
part_scan=part_scan,
auto_clear=auto_clear,
direct_io=direct_io,
)
def __del__(self):
loop_detach(self.device)
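`loop_create_dev` above creates the node with `os.mknod` and `os.makedev(7, no)`: Linux loop devices sit on block major 7, and `loop_get_sysfs` resolves the same major/minor pair back through sysfs. A small sketch of that mapping:

```python
import os

# /dev/loopN is a block device with major 7 and minor N;
# loop_get_sysfs looks it up again via /sys/dev/block/{major}:{minor}.
dev = os.makedev(7, 5)
assert os.major(dev) == 7
assert os.minor(dev) == 5
assert f"/sys/dev/block/{os.major(dev)}:{os.minor(dev)}" == "/sys/dev/block/7:5"
```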

378 builder/lib/mount.py Normal file
@@ -0,0 +1,378 @@
import io
import os
import libmount
from typing import Self
from logging import getLogger
from builder.lib.blkid import Blkid
from builder.lib.serializable import SerializableDict, SerializableList
log = getLogger(__name__)
virtual_fs = [
"sysfs", "tmpfs", "proc", "cgroup", "cgroup2", "hugetlbfs",
"devtmpfs", "binfmt_misc", "configfs", "debugfs", "tracefs", "cpuset",
"securityfs", "sockfs", "bpf", "pipefs", "ramfs", "binder", "bdev",
"devpts", "autofs", "efivarfs", "mqueue", "resctrl", "pstore", "fusectl",
]
real_fs = [
"reiserfs", "ext4", "ext3", "ext2", "cramfs", "squashfs", "minix", "vfat",
"msdos", "exfat", "iso9660", "hfsplus", "gfs2meta", "ecryptfs", "ntfs3", "ufs",
"jffs2", "ubifs", "affs", "romfs", "ocfs2_dlmfs", "omfs", "jfs", "xfs", "nilfs2",
"befs", "ocfs2", "btrfs", "hfs", "gfs2", "udf", "f2fs", "bcachefs", "erofs",
]
class MountPoint(SerializableDict):
device: str = None
source: str = None
target: str = None
fstype: str = None
option: list[str] = []
fs_freq: int = 0
fs_passno: int = 0
@property
def virtual(self) -> bool:
if self.fstype:
if self.fstype in virtual_fs: return True
if self.fstype in real_fs: return False
if self.device:
if self.device.startswith(os.sep): return False
if self.source:
if self.source.startswith(os.sep): return False
if "=" in self.source: return False
return True
@property
def level(self) -> int:
if self.target is None: return 0
path = os.path.realpath(self.target)
cnt = path.count(os.sep)
if (
path.startswith(os.sep) and
not path.endswith(os.sep)
): cnt += 1
return cnt
@property
def options(self):
return ",".join(self.option)
@options.setter
def options(self, val: str):
self.option = val.split(",")
def get_option(self, opt: str) -> str | None:
if opt in self.option:
return opt
if "=" not in opt:
start = f"{opt}="
values = (o for o in self.option if o.startswith(start))
return next(values, None)
return None
def remove_option(self, opt: str | list[str]) -> Self:
if isinstance(opt, list):
for o in opt:
self.remove_option(o)
return self
if opt in self.option:
self.option.remove(opt)
return self
if "=" in opt: opt = opt[:opt.find("=")]
val = self.get_option(opt)
if val:
self.remove_option(val)
return self
def exclusive_option(self, opt: str, opt1: str, opt2: str) -> Self:
if opt == opt1 or opt == opt2:
self.remove_option(opt1)
return self
def add_option(self, opt: str) -> Self:
self.exclusive_option(opt, "ro", "rw")
self.exclusive_option(opt, "dev", "nodev")
self.exclusive_option(opt, "suid", "nosuid")
self.exclusive_option(opt, "exec", "noexec")
self.exclusive_option(opt, "relatime", "noatime")
self.remove_option(opt)
if opt not in self.option:
self.option.append(opt)
return self
def ro(self) -> Self:
self.add_option("ro")
return self
def rw(self) -> Self:
self.add_option("rw")
return self
def have_source(self) -> bool: return self.source is not None and self.source != "none"
def have_target(self) -> bool: return self.target is not None and self.target != "none"
def have_fstype(self) -> bool: return self.fstype is not None and self.fstype != "none"
def have_options(self) -> bool: return len(self.option) > 0
def update_device(self):
if self.virtual or self.source is None: return
if self.source.startswith(os.sep):
self.device = self.source
return
if "=" in self.source:
self.device = Blkid().evaluate_tag(self.source)
return
def persist_source(self, tag: str = "UUID"):
if self.virtual: return
if self.device is None: self.update_device()
if self.device is None: return
tag = tag.upper()
if tag == "PATH":
self.source = self.device
return
self.source = Blkid().get_tag_value(
None, tag, self.device
)
def tolibmount(self) -> libmount.Context:
mnt = libmount.Context()
mnt.target = self.target
if self.have_source(): mnt.source = self.source
if self.have_fstype(): mnt.fstype = self.fstype
if self.have_options(): mnt.options = self.options
return mnt
def ismount(self) -> bool:
return os.path.ismount(self.target)
def mount(self) -> Self:
if not os.path.exists(self.target):
os.makedirs(self.target, mode=0o0755)
if not os.path.ismount(self.target):
log.debug(
f"try mount {self.source} "
f"to {self.target} "
f"as {self.fstype} "
f"with {self.options}"
)
lib = self.tolibmount()
lib.mount()
return self
def umount(self) -> Self:
if os.path.ismount(self.target):
lib = self.tolibmount()
lib.umount()
log.debug(f"umount {self.target} successfully")
return self
def from_mount_line(self, line: str) -> Self:
d = line.split()
if len(d) != 6:
raise ValueError("bad mount line")
self.source = d[0]
self.target = d[1]
self.fstype = d[2]
self.options = d[3]
self.fs_freq = int(d[4])
self.fs_passno = int(d[5])
return self
def to_mount_line(self) -> str:
self.fixup()
fields = [
self.source,
self.target,
self.fstype,
self.options,
str(self.fs_freq),
str(self.fs_passno),
]
return " ".join(fields)
def fixup(self) -> Self:
if not self.have_source(): self.source = "none"
if not self.have_target(): self.target = "none"
if not self.have_fstype(): self.fstype = "none"
if not self.have_options(): self.options = "defaults"
return self
def clone(self) -> Self:
mnt = MountPoint()
mnt.device = self.device
mnt.source = self.source
mnt.target = self.target
mnt.fstype = self.fstype
mnt.option = self.option
mnt.fs_freq = self.fs_freq
mnt.fs_passno = self.fs_passno
return mnt
def __init__(
self,
data: dict = None,
device: str = None,
source: str = None,
target: str = None,
fstype: str = None,
options: str = None,
option: list[str] = None,
fs_freq: int = None,
fs_passno: int = None,
):
super().__init__()
self.device = None
self.source = None
self.target = None
self.fstype = None
self.option = []
self.fs_freq = 0
self.fs_passno = 0
if data: self.from_dict(data)
if device: self.device = device
if source: self.source = source
if target: self.target = target
if fstype: self.fstype = fstype
if options: self.options = options
if option: self.option = option
if fs_freq: self.fs_freq = fs_freq
if fs_passno: self.fs_passno = fs_passno
@staticmethod
def parse_mount_line(line: str):
return MountPoint().from_mount_line(line)
class MountTab(list[MountPoint], SerializableList):
def find_folder(self, folder: str) -> Self:
root = os.path.realpath(folder)
return [mnt for mnt in self if mnt.target.startswith(root)]
def find_target(self, target: str) -> Self: return [mnt for mnt in self if mnt.target == target]
def find_source(self, source: str) -> Self: return [mnt for mnt in self if mnt.source == source]
def find_fstype(self, fstype: str) -> Self: return [mnt for mnt in self if mnt.fstype == fstype]
def clone(self) -> Self:
mnts = MountTab()
for mnt in self:
mnts.append(mnt.clone())
return mnts
def mount_all(self, prefix: str = None, mkdir: bool = False) -> Self:
for mnt in self:
m = mnt.clone()
if prefix:
if m.target == "/": m.target = prefix
else: m.target = os.path.join(prefix, m.target[1:])
if mkdir and not os.path.exists(m.target):
os.makedirs(m.target, mode=0o0755)
m.mount()
return self
def resort(self):
self.sort(key=lambda x: (x.level, len(x.target), x.target))
def strip_virtual(self) -> Self:
for mnt in list(self):
if mnt.virtual:
self.remove(mnt)
return self
def to_list(self) -> list:
return self
def from_list(self, o: list) -> Self:
self.clear()
for i in o: self.append(MountPoint().from_dict(i))
return self
def to_mount_file(self, linesep=os.linesep) -> str:
ret = "# Source Target FS-Type Options FS-Freq FS-Passno"
ret += linesep
for point in self:
ret += point.to_mount_line()
ret += linesep
return ret
def write_mount_file(self, fp: io.TextIOWrapper):
fp.write(self.to_mount_file())
fp.flush()
def create_mount_file(self, path: str) -> Self:
with open(path, "w") as f:
self.write_mount_file(f)
return self
def load_mount_fp(self, fp: io.TextIOWrapper) -> Self:
for line in fp:
if line is None: break
line = line.strip()
if len(line) <= 0: continue
if line.startswith("#"): continue
mnt = MountPoint.parse_mount_line(line)
self.append(mnt)
return self
def load_mount_file(self, file: str) -> Self:
with open(file, "r") as f:
self.load_mount_fp(f)
return self
def load_fstab(self) -> Self:
self.load_mount_file("/etc/fstab")
return self
def load_mounts(self) -> Self:
self.load_mount_file("/proc/mounts")
return self
def load_mounts_pid(self, pid: int) -> Self:
path = f"/proc/{pid}/mounts"
self.load_mount_file(path)
return self
def from_mount_fp(self, fp: io.TextIOWrapper) -> Self:
self.clear()
self.load_mount_fp(fp)
return self
def from_mount_file(self, file: str) -> Self:
self.clear()
self.load_mount_file(file)
return self
def from_fstab(self) -> Self:
self.clear()
self.load_fstab()
return self
def from_mounts(self) -> Self:
self.clear()
self.load_mounts()
return self
def from_mounts_pid(self, pid: int) -> Self:
self.clear()
self.load_mounts_pid(pid)
return self
@staticmethod
def parse_mount_fp(fp: io.TextIOWrapper):
return MountTab().from_mount_fp(fp)
@staticmethod
def parse_mount_file(file: str):
return MountTab().from_mount_file(file)
@staticmethod
def parse_fstab():
return MountTab().from_fstab()
@staticmethod
def parse_mounts():
return MountTab().from_mounts()
@staticmethod
def parse_mounts_pid(pid: int):
return MountTab().from_mounts_pid(pid)
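`from_mount_line`/`to_mount_line` above round-trip the classic six-field fstab/mounts format; a standalone sketch of that split (the UUID below is made up):

```python
line = "UUID=0000-ABCD /boot vfat rw,relatime 0 2"
fields = line.split()
assert len(fields) == 6  # source, target, fstype, options, fs_freq, fs_passno
source, target, fstype, options, fs_freq, fs_passno = fields
assert fstype == "vfat"
assert options.split(",") == ["rw", "relatime"]
assert (int(fs_freq), int(fs_passno)) == (0, 2)
```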

106 builder/lib/serializable.py Normal file
@@ -0,0 +1,106 @@
from typing import Self
class Serializable:
def serialize(self) -> None | bool | int | float | str | tuple | list | dict: pass
def unserialize(self, value: None | bool | int | float | str | tuple | list | dict): pass
def to_json(
self, *,
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
cls=None,
indent=None,
separators=None,
default=None,
sort_keys=False,
**kw
) -> str:
from builder.lib.json import dumps
return dumps(
self.serialize(),
skipkeys=skipkeys,
ensure_ascii=ensure_ascii,
check_circular=check_circular,
allow_nan=allow_nan,
cls=cls,
indent=indent,
separators=separators,
default=default,
sort_keys=sort_keys,
**kw
)
def to_yaml(self) -> str:
from yaml import safe_dump_all
return safe_dump_all(self.serialize())
@property
def class_path(self) -> str:
ret = self.__class__.__module__ or ""
if len(ret) > 0: ret += "."
ret += self.__class__.__qualname__
return ret
def __str__(self) -> str:
j = self.to_json(indent=2).strip()
return f"{self.class_path}({j})"
def __repr__(self) -> str:
j = self.to_json().strip()
return f"{self.class_path}({j})"
class SerializableDict(Serializable):
def to_dict(self) -> dict:
ret = {}
for key in dir(self):
val = getattr(self, key)
if key.startswith("__"): continue
if key.endswith("__"): continue
if callable(val): continue
ret[key] = val
return ret
def from_dict(self, o: dict) -> Self:
for key in o:
val = o[key]
if key.startswith("__"): continue
if key.endswith("__"): continue
if callable(val): continue
setattr(self, key, val)
return self
def serialize(self) -> dict:
return self.to_dict()
def unserialize(self, value: dict):
self.from_dict(value)
def __dict__(self) -> dict:
return self.to_dict()
def __init__(self, o: dict = None):
if o: self.from_dict(o)
class SerializableList(Serializable):
def to_list(self) -> list:
pass
def from_list(self, o: list) -> Self:
pass
def serialize(self) -> list:
return self.to_list()
def unserialize(self, value: list):
self.from_list(value)
def __list__(self) -> list:
return self.to_list()
def __init__(self, o: list = None):
if o: self.from_list(o)
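`SerializableDict.to_dict` above enumerates attributes with `dir()` and filters out dunder names and callables; a minimal standalone version of that reflection pattern:

```python
class Point:
    x: int = 0
    y: int = 0

    def to_dict(self) -> dict:
        # Same filtering rule as SerializableDict.to_dict: skip dunder
        # names and anything callable (methods), keep plain attributes.
        return {k: getattr(self, k) for k in dir(self)
                if not (k.startswith("__") or k.endswith("__"))
                and not callable(getattr(self, k))}

p = Point()
p.x = 3
assert p.to_dict() == {"x": 3, "y": 0}
```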

261 builder/lib/shadow.py Normal file
@@ -0,0 +1,261 @@
import io
from typing import Self
from builder.lib.serializable import SerializableDict, SerializableList
def zero2empty(num: int) -> str:
return str(num) if num != 0 else ""
def none2empty(val: str) -> str:
return val if val else ""
class UserEntry(SerializableDict):
name: str = None
def from_line(self, line: str): pass
def to_line(self) -> str: pass
class UserFile(SerializableList):
def load_line(self, line: str): pass
def unload(self): pass
def load_str(self, content: str | list[str]) -> Self:
if type(content) is str:
content = content.split("\n")
for line in content:
line = line.strip()
if line.startswith("#"): continue
if len(line) <= 0: continue
self.load_line(line)
return self
def load_fp(self, fp: io.TextIOWrapper) -> Self:
self.load_str(fp.readlines())
return self
def load_file(self, file: str) -> Self:
with open(file, "r") as f:
self.load_fp(f)
return self
def from_str(self, content: str) -> Self:
self.unload()
self.load_str(content)
return self
def from_fp(self, fp: io.TextIOWrapper) -> Self:
self.unload()
self.load_fp(fp)
return self
def from_file(self, file: str) -> Self:
self.unload()
self.load_file(file)
return self
class ShadowEntry(UserEntry):
name: str = None
password: str = None
last_change: int = 0
min_age: int = 0
max_age: int = 0
warning_period: int = 0
inactivity_period: int = 0
expiration: int = 0
def from_line(self, line: str):
values = line.split(":")
if len(values) != 8:
raise ValueError("fields mismatch")
self.name = values[0]
self.password = values[1]
self.last_change = int(values[2]) if len(values[2]) > 0 else 0
self.min_age = int(values[3]) if len(values[3]) > 0 else 0
self.max_age = int(values[4]) if len(values[4]) > 0 else 0
self.warning_period = int(values[5]) if len(values[5]) > 0 else 0
self.inactivity_period = int(values[6]) if len(values[6]) > 0 else 0
self.expiration = int(values[7]) if len(values[7]) > 0 else 0
def to_line(self) -> str:
values = [
none2empty(self.name),
none2empty(self.password),
zero2empty(self.last_change),
zero2empty(self.min_age),
zero2empty(self.max_age),
zero2empty(self.warning_period),
zero2empty(self.inactivity_period),
zero2empty(self.expiration),
]
return (":".join(values)) + "\n"
class GshadowEntry(UserEntry):
name: str = None
password: str = None
admins: list[str] = None
members: list[str] = None
@property
def admin(self):
return ",".join(self.admins)
@admin.setter
def admin(self, val: str):
self.admins = val.split(",")
@property
def member(self):
return ",".join(self.members)
@member.setter
def member(self, val: str):
self.members = val.split(",")
def from_line(self, line: str):
values = line.split(":")
if len(values) != 4:
raise ValueError("fields mismatch")
self.name = values[0]
self.password = values[1]
self.admin = values[2]
self.member = values[3]
def to_line(self) -> str:
values = [
none2empty(self.name),
none2empty(self.password),
none2empty(self.admin),
none2empty(self.member),
]
return (":".join(values)) + "\n"
class PasswdEntry(UserEntry):
name: str = None
password: str = None
uid: int = -1
gid: int = -1
comment: str = None
home: str = None
shell: str = None
def from_line(self, line: str):
values = line.split(":")
if len(values) != 7:
raise ValueError("fields mismatch")
self.name = values[0]
self.password = values[1]
self.uid = int(values[2])
self.gid = int(values[3])
self.comment = values[4]
self.home = values[5]
self.shell = values[6]
def to_line(self) -> str:
values = [
none2empty(self.name),
none2empty(self.password),
str(self.uid),
str(self.gid),
none2empty(self.comment),
none2empty(self.home),
none2empty(self.shell),
]
return (":".join(values)) + "\n"
class GroupEntry(UserEntry):
name: str = None
password: str = None
gid: int = -1
users: list[str] = None
@property
def user(self):
return ",".join(self.users)
@user.setter
def user(self, val: str):
self.users = val.split(",")
def from_line(self, line: str):
values = line.split(":")
if len(values) != 4:
raise ValueError("fields mismatch")
self.name = values[0]
self.password = values[1]
self.gid = int(values[2])
self.user = values[3]
def to_line(self) -> str:
values = [
none2empty(self.name),
none2empty(self.password),
str(self.gid),
none2empty(self.user),
]
return (":".join(values)) + "\n"
class ShadowFile(list[ShadowEntry], UserFile):
def unload(self): self.clear()
def load_line(self, line: str):
ent = ShadowEntry()
ent.from_line(line)
self.append(ent)
def lookup_name(self, name: str) -> ShadowEntry:
return next((e for e in self if e.name == name), None)
class GshadowFile(list[GshadowEntry], UserFile):
def unload(self): self.clear()
def load_line(self, line: str):
ent = GshadowEntry()
ent.from_line(line)
self.append(ent)
def lookup_name(self, name: str) -> GshadowEntry:
return next((e for e in self if e.name == name), None)
class PasswdFile(list[PasswdEntry], UserFile):
def unload(self): self.clear()
def load_line(self, line: str):
ent = PasswdEntry()
ent.from_line(line)
self.append(ent)
def lookup_name(self, name: str) -> PasswdEntry:
return next((e for e in self if e.name == name), None)
def lookup_uid(self, uid: int) -> PasswdEntry:
return next((e for e in self if e.uid == uid), None)
def lookup_gid(self, gid: int) -> PasswdEntry:
return next((e for e in self if e.gid == gid), None)
class GroupFile(list[GroupEntry], UserFile):
def unload(self): self.clear()
def load_line(self, line: str):
ent = GroupEntry()
ent.from_line(line)
self.append(ent)
def lookup_name(self, name: str) -> GroupEntry:
return next((e for e in self if e.name == name), None)
def lookup_gid(self, gid: int) -> GroupEntry:
return next((e for e in self if e.gid == gid), None)
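`PasswdEntry.from_line` above splits the seven colon-separated passwd(5) fields; for example:

```python
line = "root:x:0:0:root:/root:/bin/bash"
fields = line.split(":")
assert len(fields) == 7  # name:password:uid:gid:comment:home:shell
assert fields[0] == "root"
assert int(fields[2]) == 0 and int(fields[3]) == 0
assert fields[6] == "/bin/bash"
```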

143 builder/lib/subscript.py Normal file
@@ -0,0 +1,143 @@
from builder.lib.utils import str_find_all
from logging import getLogger
log = getLogger(__name__)
class SubScriptValue:
content: str = None
original: str = None
incomplete: bool = False
def __str__(self): return self.content
def __repr__(self): return self.content
def dict_get(key: str, root: dict):
def get_token(node, k):
nt = type(node)
if nt is list: return node[int(k)]
elif nt is tuple: return node[int(k)]
elif nt is dict: return node[k]
else: raise KeyError(f"unsupported get in {nt.__name__}")
keys = ["[", "."]
node = root
while len(key) > 0:
if key[0] == "[":
p = key.find("]", 1)
if p < 0: raise ValueError("missing ]")
node = get_token(node, key[1:p])
key = key[p + 1:]
continue
if key[0] == ".":
key = key[1:]
continue
p = str_find_all(key, keys)
k = key[:p] if p >= 0 else key
node = get_token(node, k)
if p < 0: return node
key = key[p + 1:]
return node
class SubScript:
root: dict
resolved: list[str]
unresolved: list[str]
count: int
def resolve_token(self, token: str) -> SubScriptValue:
val = SubScriptValue()
val.original = token
lstr = False
if token[0] == "@":
token = token[1:]
lstr = True
if token not in self.unresolved:
self.unresolved.append(token)
value = dict_get(token, self.root)
if token not in self.resolved:
val.incomplete = True
return val
if lstr:
vt = type(value)
if vt is list: value = " ".join(value)
else: raise ValueError(f"@ is not supported for {vt.__name__}")
self.unresolved.remove(token)
val.content = value
return val
def process(self, content: str, lvl: str) -> SubScriptValue:
last = 0
ret = SubScriptValue()
ret.original = content
ret.content = content
while last < len(content):
last = content.find("$", last)
if last < 0: break
if content[last:last+2] == "$$":
last += 2
continue
if len(content) <= last + 2 or content[last + 1] != "{":
raise ValueError(f"unexpected token in subscript at {lvl}")
tp = content.find("}", last + 1)
if tp < 0: raise ValueError(f"missing }} in subscript at {lvl}")
token = content[last + 2: tp]
val = self.resolve_token(token)
if val.incomplete:
ret.incomplete = True
return ret
value = val.content
content = content[:last] + value + content[tp + 1:]
last += len(value)
ret.content = content
return ret
def parse_rec(self, node: dict | list, level: str) -> bool:
def process_one(key, lvl):
value = node[key]
vt = type(value)
if vt is dict or vt is list:
if not self.parse_rec(value, lvl):
return False
elif vt is str:
val = self.process(value, lvl)
if val.incomplete:
return False
node[key] = val.content
self.resolved.append(lvl)
self.count += 1
return True
ret = True
nt = type(node)
if nt is dict:
for key in node:
lvl = f"{level}.{key}" if len(level) > 0 else key
if lvl in self.resolved: continue
if not process_one(key, lvl): ret = False
elif nt is list or nt is tuple:
for idx in range(len(node)):
lvl = f"{level}[{idx}]"
if lvl in self.resolved: continue
if not process_one(idx, lvl): ret = False
else: raise ValueError(f"unknown input value at {level}")
return ret
def dump_unresolved(self):
for key in self.unresolved:
log.warning(f"value {key} unresolved")
def parse(self, root: dict):
self.root = root
while True:
self.count = 0
ret = self.parse_rec(root, "")
if ret: break
if self.count <= 0:
self.dump_unresolved()
raise ValueError("some value cannot be resolved")
self.dump_unresolved()
def __init__(self):
self.resolved = []
self.unresolved = []
self.count = 0
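The resolution strategy above can be sketched standalone: `SubScript.parse` keeps re-walking the config tree, substituting each `${key}` whose target is itself already resolved, until a full pass makes no progress. The following is a simplified re-implementation of that fixed-point loop (not the project's API; `resolve` and the flat example config are hypothetical, and only top-level string values with dotted keys are handled):

```python
# Minimal sketch of fixed-point ${key} substitution, assuming dotted keys
# and string values only (the real SubScript also handles lists and @-joins).
import re

PATTERN = re.compile(r"\$\{([^}]+)\}")

def resolve(root: dict) -> dict:
    def lookup(key: str):
        node = root
        for part in key.split("."):
            node = node[part]
        return node
    progress = True
    while progress:
        progress = False
        for key, value in root.items():
            if not isinstance(value, str):
                continue
            def sub(m):
                nonlocal progress
                target = lookup(m.group(1))
                # target still contains ${...}: leave it for a later pass
                if isinstance(target, str) and PATTERN.search(target):
                    return m.group(0)
                progress = True
                return str(target)
            root[key] = PATTERN.sub(sub, value)
    return root

cfg = {"device": "ayn-odin2", "target": "${device}-sdcard"}
print(resolve(cfg)["target"])  # ayn-odin2-sdcard
```

Unresolvable references simply survive the loop here; the real class instead tracks them in `unresolved` and raises once a pass resolves nothing new.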

builder/lib/utils.py (new file)
@@ -0,0 +1,129 @@
import os
import io
import shlex
import shutil
import typing
from logging import getLogger
log = getLogger(__name__)


def str_find_all(
    orig: str,
    keys: list[str] | tuple[str] | str,
    start: typing.SupportsIndex | None = None,
    end: typing.SupportsIndex | None = None,
) -> int:
    if type(keys) is str: return orig.find(keys, start, end)
    result: list[int] = [orig.find(key, start, end) for key in keys]
    while -1 in result: result.remove(-1)
    return min(result, default=-1)


def parse_cmd_args(cmd: str | list[str]) -> list[str]:
    if type(cmd) is str: return shlex.split(cmd)
    elif type(cmd) is list: return cmd
    else: raise TypeError("unknown type for cmd")


def find_external(name: str) -> str:
    """
    Find a Linux executable path
    find_external("systemctl") = "/usr/bin/systemctl"
    find_external("service") = None
    """
    return shutil.which(name)


def have_external(name: str) -> bool:
    """
    Check whether a command is available in PATH
    have_external("systemctl") = True
    have_external("service") = False
    """
    return shutil.which(name) is not None


def fd_get_path(fd: int) -> str | None:
    """
    Get file path by FD
    fd_get_path(1) = "/dev/pts/0"
    """
    link = f"/proc/self/fd/{fd}"
    if not os.path.exists(link): return None
    path = os.readlink(link)
    if not path.startswith("/"): return None
    if path.startswith("/memfd:"): return None
    if path.endswith(" (deleted)"): return None
    if not os.path.exists(path): return None
    return path


def size_to_bytes(value: str | int, alt_units: dict = None) -> int:
    units = {
        'b': 0.125, 'bit': 0.125, 'bits': 0.125, 'Bit': 0.125, 'Bits': 0.125,
        'B': 1, 'Byte': 1, 'Bytes': 1, 'bytes': 1, 'byte': 1,
        'k': 10**3, 'kB': 10**3, 'kb': 10**3, 'K': 2**10, 'KB': 2**10, 'KiB': 2**10,
        'm': 10**6, 'mB': 10**6, 'mb': 10**6, 'M': 2**20, 'MB': 2**20, 'MiB': 2**20,
        'g': 10**9, 'gB': 10**9, 'gb': 10**9, 'G': 2**30, 'GB': 2**30, 'GiB': 2**30,
        't': 10**12, 'tB': 10**12, 'tb': 10**12, 'T': 2**40, 'TB': 2**40, 'TiB': 2**40,
        'p': 10**15, 'pB': 10**15, 'pb': 10**15, 'P': 2**50, 'PB': 2**50, 'PiB': 2**50,
        'e': 10**18, 'eB': 10**18, 'eb': 10**18, 'E': 2**60, 'EB': 2**60, 'EiB': 2**60,
        'z': 10**21, 'zB': 10**21, 'zb': 10**21, 'Z': 2**70, 'ZB': 2**70, 'ZiB': 2**70,
        'y': 10**24, 'yB': 10**24, 'yb': 10**24, 'Y': 2**80, 'YB': 2**80, 'YiB': 2**80,
    }
    if type(value) is int: return value
    elif type(value) is str:
        if alt_units: units.update(alt_units)
        # prefer the longest unit suffix that matches (e.g. "MiB" over "B")
        matches = {unit: len(unit) for unit in units if value.endswith(unit)}
        max_unit = max(matches.values(), default=0)
        unit = next((unit for unit in matches.keys() if matches[unit] == max_unit), None)
        mul = units[unit] if unit else 1.0
        return int(float(value[:len(value) - max_unit].strip()) * mul)
    else: raise TypeError("bad size value")


def bytes_pad(b: bytes, size: int, trunc: bool = False, pad: bytes = b'\0') -> bytes:
    l = len(b)
    if l > size and trunc: b = b[:size]
    if l < size: b += pad * (size - l)
    return b


def round_up(value: int, align: int) -> int:
    return (value + align - 1) & ~(align - 1)


def round_down(value: int, align: int) -> int:
    return value & ~(align - 1)


def open_config(path: str, mode=0o0644) -> io.TextIOWrapper:
    dist = f"{path}.dist"
    have_dist = False
    if os.path.exists(dist):
        have_dist = True
    elif os.path.exists(path):
        shutil.move(path, dist)
        have_dist = True
    flags = os.O_RDWR | os.O_CREAT | os.O_TRUNC
    fd = os.open(path=path, flags=flags, mode=mode)
    if fd < 0: raise IOError(f"open {path} failed")
    try:
        fp = os.fdopen(fd, "w")
        fp.write("# This file is auto generated by arch-image-builder\n")
        if have_dist:
            fn = os.path.basename(dist)
            fp.write(f"# Original file is {fn}\n")
        fp.write("\n")
        fp.flush()
    except:
        os.close(fd)
        raise
    return fp


def path_to_name(path: str) -> str:
    if path == "/": return "rootfs"
    if path.startswith("/"): path = path[1:]
    if len(path) <= 0: return "empty"
    return path.replace("/", "-")
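A usage note on the alignment helpers in `utils.py` (a standalone sketch, not repository code): `round_up` and `round_down` rely on `align` being a power of two, because `~(align - 1)` is only a contiguous low-bit mask in that case.

```python
# Power-of-two alignment via bit masking, as used by round_up/round_down.
def round_up(value: int, align: int) -> int:
    # ~(align - 1) clears the low bits only when align is a power of two
    assert align & (align - 1) == 0, "align must be a power of two"
    return (value + align - 1) & ~(align - 1)

def round_down(value: int, align: int) -> int:
    assert align & (align - 1) == 0, "align must be a power of two"
    return value & ~(align - 1)

print(round_up(5000, 4096))    # 8192
print(round_down(5000, 4096))  # 4096
```

For a non-power-of-two alignment the same helpers would need the division form `((value + align - 1) // align) * align` instead.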

builder/main.py (new file)
@@ -0,0 +1,77 @@
import os
import logging
from sys import stdout
from locale import setlocale, LC_ALL
from argparse import ArgumentParser
from builder.build import bootstrap
from builder.lib import config, utils
from builder.lib.context import ArchBuilderContext
log = logging.getLogger(__name__)


def parse_arguments(ctx: ArchBuilderContext):
    parser = ArgumentParser(
        prog="arch-image-builder",
        description="Build flashable image for Arch Linux",
    )
    parser.add_argument("-c", "--config", help="Select config to build", required=True, action='append')
    parser.add_argument("-o", "--workspace", help="Set workspace for builder", default=ctx.work)
    parser.add_argument("-d", "--debug", help="Enable debug logging", default=False, action='store_true')
    parser.add_argument("-G", "--no-gpgcheck", help="Disable GPG check", default=False, action='store_true')
    parser.add_argument("-r", "--repack", help="Repack rootfs only", default=False, action='store_true')
    args = parser.parse_args()

    # debug logging
    if args.debug:
        logging.root.setLevel(logging.DEBUG)
        log.debug("enabled debug logging")
    if args.no_gpgcheck: ctx.gpgcheck = False
    if args.repack: ctx.repack = True

    # collect config paths
    configs = []
    for conf in args.config:
        configs.extend(conf.split(","))

    # load and populate configs
    config.load_configs(ctx, configs)
    config.populate_config(ctx)

    # build folder: {TOP}/build/{TARGET}
    ctx.work = os.path.realpath(os.path.join(args.workspace, ctx.target))


def init_environment():
    # set user agent for pacman (some mirrors require it)
    os.environ["HTTP_USER_AGENT"] = "arch-image-builder(pacman) pyalpm"

    # use the default C locale to avoid problems
    os.environ["LANG"] = "C"
    os.environ["LANGUAGE"] = "C"
    os.environ["LC_ALL"] = "C"
    setlocale(LC_ALL, "C")


def check_system():
    # why not root?
    if os.getuid() != 0:
        raise PermissionError("this tool can only run as root")

    # pacman is always needed
    if not utils.have_external("pacman"):
        raise FileNotFoundError("pacman not found")


def main():
    logging.basicConfig(stream=stdout, level=logging.INFO)
    check_system()
    init_environment()
    ctx = ArchBuilderContext()
    ctx.dir = os.path.realpath(os.path.join(os.path.dirname(__file__), os.path.pardir))
    ctx.work = os.path.realpath(os.path.join(ctx.dir, "build"))
    parse_arguments(ctx)
    log.info(f"source tree folder: {ctx.dir}")
    log.info(f"workspace folder: {ctx.work}")
    log.info(f"build target name: {ctx.target}")
    bootstrap.build_rootfs(ctx)
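The `-c` handling in `parse_arguments` accepts both repeated flags and comma-separated lists, so `-c a,b -c c` yields three configs. A standalone sketch of that flattening (mirroring, not importing, the code above; the argument values are examples):

```python
# Repeated -c flags are collected with action='append', then each value is
# split on commas to build the final config list.
from argparse import ArgumentParser

parser = ArgumentParser(prog="arch-image-builder")
parser.add_argument("-c", "--config", required=True, action="append")
args = parser.parse_args(["-c", "os/archlinux,locale/zh-cn", "-c", "device/x86_64"])

configs = []
for conf in args.config:
    configs.extend(conf.split(","))
print(configs)  # ['os/archlinux', 'locale/zh-cn', 'device/x86_64']
```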

@@ -0,0 +1,8 @@
sysconf:
  user:
    - name: alarm
      password: alarm
      groups: wheel
    - name: root
      uid: 0
      password: root

@@ -0,0 +1,8 @@
sysconf:
  user:
    - name: arch
      password: arch
      groups: wheel
    - name: root
      uid: 0
      password: root

@@ -0,0 +1,16 @@
filesystem:
  files:
    - path: /etc/polkit-1/rules.d/99-wheel.rules
      mode: 0640
      content: |
        polkit.addRule(function(action,subject){
            if(subject.isInGroup("wheel"))
                return polkit.Result.YES;
        });
    - path: /etc/sudoers.d/wheel
      mode: 0640
      content: |
        %wheel ALL=(ALL:ALL) NOPASSWD: ALL
pacman:
  install:
    - sudo

configs/custom/.gitkeep (new empty file)

@@ -0,0 +1,9 @@
pacman:
  install:
    - gnome
systemd:
  default: graphical.target
  enable:
    - gdm.service
also:
  - packages/network-manager

@@ -0,0 +1,79 @@
name: AYN Odin 2
arch: aarch64
soc: qcs8550
device: ayn-odin2
device_suffix: -hypdtbo
pacman:
  install:
    - wireless-regdb
    - linux-firmware-ayn-odin2
    - linux-ayn-odin2-edge
    - mesa-qcom-git
    - vulkan-tools
    - xcb-util-keysyms
systemd:
  disable:
    - rmtfs.service
filesystem:
  files:
    - path: /etc/udev/rules.d/99-${device}.rules
      content: |
        SUBSYSTEM=="input", ATTRS{name}=="Ayn Odin2 Gamepad", MODE="0666", ENV{ID_INPUT_MOUSE}="0", ENV{ID_INPUT_JOYSTICK}="1"
    - path: /etc/systemd/logind.conf.d/power-key-lock.conf
      content: |
        [Login]
        HandlePowerKey=lock
    - path: /etc/systemd/system.conf.d/show-status.conf
      content: |
        [Manager]
        ShowStatus=yes
    - path: /etc/systemd/sleep.conf.d/no-suspend.conf
      content: |
        [Sleep]
        AllowSuspend=no
        AllowHibernation=no
        AllowSuspendThenHibernate=no
    - path: /etc/systemd/resolved.conf.d/no-mdns.conf
      content: |
        [Resolve]
        MulticastDNS=no
        LLMNR=no
sysconf:
  chassis: handset
  environments:
    __GLX_VENDOR_LIBRARY_NAME: mesa
    MESA_LOADER_DRIVER_OVERRIDE: zink
    GALLIUM_DRIVER: zink
kernel:
  cmdline:
    - clk_ignore_unused
    - pd_ignore_unused
    - panic=30
    - loglevel=8
    - allow_mismatched_32bit_el0
mkinitcpio:
  files:
    - /usr/lib/firmware/qcom/sm8550/ayn/odin2/adsp.mbn
    - /usr/lib/firmware/qcom/sm8550/ayn/odin2/adsp_dtb.mbn
    - /usr/lib/firmware/qcom/sm8550/ayn/odin2/cdsp.mbn
    - /usr/lib/firmware/qcom/sm8550/ayn/odin2/cdsp_dtb.mbn
    - /usr/lib/firmware/ath12k/WCN7850/hw2.0/amss.bin
    - /usr/lib/firmware/ath12k/WCN7850/hw2.0/regdb.bin
    - /usr/lib/firmware/ath12k/WCN7850/hw2.0/board-2.bin
    - /usr/lib/firmware/ath12k/WCN7850/hw2.0/m3.bin
    - /usr/lib/firmware/qca/hmtbtfw20.tlv
    - /usr/lib/firmware/qca/hmtnv20.bin
    - /usr/lib/firmware/qcom/sm8550/ayn/odin2/a740_zap.mbn
    - /usr/lib/firmware/qcom/gmu_gen70200.bin
    - /usr/lib/firmware/qcom/a740_sqe.fw
    - /usr/lib/firmware/regulatory.db.p7s
    - /usr/lib/firmware/regulatory.db
also:
  - os/archlinuxarm
  - repo/archlinuxcn
  - repo/renegade-project
  - device/qcom
  - packages/systemd-gadget
  - packages/openssh
  - packages/editor
  - packages/bluez

configs/device/qcom.yaml (new file)
@@ -0,0 +1,15 @@
platform: qcom
device_suffix:
pacman:
  install:
    - qbootctl
    - qrtr
    - rmtfs
    - tqftpserv
    - pd-mapper
systemd:
  enable:
    - rmtfs.service
    - qrtr-ns.service
    - pd-mapper.service
    - tqftpserv.service

@@ -0,0 +1,10 @@
name: Generic x86_64 compatible PC
arch: x86_64
pacman:
  install:
    - linux
    - fastfetch
also:
  - os/archlinux
  - packages/openssh
  - packages/systemd-networkd

configs/locale/zh-cn.yaml (new file)
@@ -0,0 +1,39 @@
locale:
  enable:
    - "zh_CN.UTF-8 UTF-8"
    - "en_US.UTF-8 UTF-8"
  default: zh_CN.UTF-8
filesystem:
  files:
    - path: /etc/conf.d/wireless-regdom
      content: |
        WIRELESS_REGDOM="CN"
    - path: /etc/systemd/resolved.conf.d/cn-dns.conf
      content: |
        [Resolve]
        DNS=114.114.114.114 119.29.29.29
        FallbackDNS=114.114.114.114 119.29.29.29
    - path: /etc/systemd/timesyncd.conf.d/cn-ntp.conf
      content: |
        [Time]
        NTP=cn.ntp.org.cn
pacman:
  install:
    - noto-fonts-cjk
    - wqy-bitmapfont
    - wqy-microhei
    - wqy-microhei-lite
    - wqy-zenhei
    - ibus
    - ibus-libpinyin
sysconf:
  environments:
    GTK_IM_MODULE: ibus
    QT_IM_MODULE: ibus
    XMODIFIERS: '@im=ibus'
    COUNTRY: CN
    LANG: zh_CN.UTF-8
    LANGUAGE: zh_CN.UTF-8
    LC_ALL: zh_CN.UTF-8
    TZ: Asia/Shanghai
  timezone: Asia/Shanghai

configs/os/archlinux.yaml (new file)
@@ -0,0 +1,11 @@
pacman:
  install:
    - core/base
sysconf:
  hosts:
    - 127.0.0.1 localhost
    - 127.0.1.1 archlinux
  hostname: archlinux
also:
  - repo/archlinux
  - common/arch-user

@@ -0,0 +1,11 @@
pacman:
  install:
    - core/base
sysconf:
  hosts:
    - 127.0.0.1 localhost
    - 127.0.1.1 alarm
  hostname: alarm
also:
  - repo/archlinuxarm
  - common/alarm-user

configs/os/manjaro.yaml (new empty file)

@@ -0,0 +1,6 @@
pacman:
  install:
    - bluez
systemd:
  enable:
    - bluetooth.service

@@ -0,0 +1,9 @@
pacman:
  install:
    - nano
    - less
sysconf:
  environments:
    EDITOR: nano
    VISUAL: nano
    PAGER: less

@@ -0,0 +1,7 @@
pacman:
  install:
    - networkmanager
systemd:
  enable:
    - NetworkManager.service
    - systemd-resolved.service

@@ -0,0 +1,6 @@
pacman:
  install:
    - openssh
systemd:
  enable:
    - sshd.service

@@ -0,0 +1,10 @@
pacman:
  install:
    - systemd-gadget
    - dnsmasq
systemd:
  disable:
    - getty@ttyGS0.service
    - usbgadget-func-acm.service
  enable:
    - systemd-networkd.service

@@ -0,0 +1,40 @@
filesystem:
  files:
    - path: /etc/systemd/network/20-ethernet.network
      content: |
        [Match]
        Name=en*
        Name=eth*
        [Network]
        DHCP=yes
        IPv6PrivacyExtensions=yes
        [DHCPv4]
        RouteMetric=100
        [IPv6AcceptRA]
        RouteMetric=100
    - path: /etc/systemd/network/20-wlan.network
      content: |
        [Match]
        Name=wl*
        [Network]
        DHCP=yes
        IPv6PrivacyExtensions=yes
        [DHCPv4]
        RouteMetric=600
        [IPv6AcceptRA]
        RouteMetric=600
    - path: /etc/systemd/network/20-wwan.network
      content: |
        [Match]
        Name=ww*
        [Network]
        DHCP=yes
        IPv6PrivacyExtensions=yes
        [DHCPv4]
        RouteMetric=700
        [IPv6AcceptRA]
        RouteMetric=700
systemd:
  enable:
    - systemd-networkd.service
    - systemd-resolved.service

@@ -0,0 +1,7 @@
pacman:
  repo:
    - name: arch4edu
      priority: 800
      server: https://mirrors.bfsu.edu.cn/arch4edu/${arch}
  install:
    - arch4edu/arch4edu-keyring

@@ -0,0 +1,21 @@
pacman:
  repo:
    - name: core
      priority: 100
      server: https://mirrors.bfsu.edu.cn/archlinux/core/os/${arch}
    - name: extra
      priority: 110
      server: https://mirrors.bfsu.edu.cn/archlinux/extra/os/${arch}
    - name: multilib
      priority: 120
      server: https://mirrors.bfsu.edu.cn/archlinux/multilib/os/${arch}
  trust:
    - eworm@archlinux.org
    - dvzrv@archlinux.org
    - bluewind@archlinux.org
    - demize@archlinux.org
    - artafinde@archlinux.org
    - anthraxx@archlinux.org
    - foxboron@archlinux.org
  install:
    - core/archlinux-keyring

@@ -0,0 +1,19 @@
pacman:
  repo:
    - name: core
      priority: 100
      server: http://mirrors.bfsu.edu.cn/archlinuxarm/${arch}/core
    - name: extra
      priority: 110
      server: https://mirrors.bfsu.edu.cn/archlinuxarm/${arch}/extra
    - name: alarm
      priority: 120
      server: https://mirrors.bfsu.edu.cn/archlinuxarm/${arch}/alarm
    - name: aur
      priority: 130
      server: https://mirrors.bfsu.edu.cn/archlinuxarm/${arch}/aur
  trust:
    - builder@archlinuxarm.org
  install:
    - core/archlinuxarm-keyring
    - core/archlinux-keyring

@@ -0,0 +1,9 @@
pacman:
  repo:
    - name: archlinuxcn
      priority: 500
      server: https://mirrors.bfsu.edu.cn/archlinuxcn/${arch}
  trust:
    - farseerfc@archlinux.org
  install:
    - archlinuxcn/archlinuxcn-keyring

@@ -0,0 +1,9 @@
pacman:
  repo:
    - name: blackarch
      priority: 1000
      server: https://mirrors.bfsu.edu.cn/blackarch/blackarch/os/${arch}
  trust:
    - noptrix@nullsecurity.net
  install:
    - blackarch/blackarch-keyring

@@ -0,0 +1,9 @@
pacman:
  repo:
    - name: renegade-project
      priority: 200
      server: https://mirror.renegade-project.tech/arch/${arch}
  trust:
    - renegade-project@classfun.cn
  install:
    - renegade-project/renegade-project-keyring

@@ -0,0 +1,77 @@
name: AYN Odin 2 in SD Card
target: ${device}-sdcard
arch: aarch64
pacman:
  install:
    - grub
    - efibootmgr
image:
  - type: disk
    output: sdcard.img
    layout: gpt
    size: 6GiB
    sector: 512
    partitions:
      - type: filesystem
        ptype: efi
        pname: esp
        size: 512MiB
        fsname: ESP
        fstype: fat32
        mount: /boot
        fstab:
          flags: rw,noatime,utf8,errors=remount-ro
      - type: filesystem
        ptype: linux-root-arm64
        pname: linux
        fsname: ArchLinuxARM
        fstype: ext4
        mount: /
        grow: yes
        fstab:
          boot: yes
          flags: rw,noatime,discard
fstab:
  dev: partlabel
grub:
  path: /boot/grub
  targets:
    - arm64-efi
bootloader:
  timeout: 3
  method:
    - grub
  items:
    - type: linux
      default: yes
      name: Arch Linux ARM for AYN Odin 2
      path: ${kernel.path}
      kernel: /${kernel.kernel}
      initramfs: /${kernel.initramfs}
      devicetree: /${kernel.devicetree}
      cmdline: ${@kernel.cmdline} ro quiet splash
filesystem:
  files:
    - path: /boot/LinuxLoader.cfg
      stage: post-fs
      content: |
        [LinuxLoader]
        Debug = true
        Target = "Linux"
        MassStorageLUN = 0
        DefaultVolUp = "BDS Menu"
        UsbHostMode = false
        HypUartEnable = false
        DisableDisplayHW = true
        [Linux]
        Image = "${kernel.kernel}"
        initrd = "${kernel.initramfs}"
        devicetree = "${kernel.devicetree}"
        cmdline = "${@kernel.cmdline}"
kernel:
  path: /boot
  kernel: Image
  initramfs: initramfs-linux.img
  devicetree: dtbs/${platform}/${soc}-${device}${device_suffix}.dtb
also:
  - device/ayn-odin2

@@ -0,0 +1,16 @@
name: AYN Odin 2 in UFS
target: ${device}-ufs
arch: aarch64
image:
  - type: filesystem
    fstype: ext4
    size: 8GiB
    sector: 4096
    label: ArchLinuxARM
    mount: /
    fstab:
      flags: rw,noatime,utf8,errors=remount-ro
fstab:
  dev: partlabel
also:
  - device/ayn-odin2

@@ -0,0 +1,72 @@
name: Generic x86_64 compatible PC Dual Boot
target: x86_64-dual
arch: x86_64
pacman:
  install:
    - grub
    - amd-ucode
    - intel-ucode
    - efibootmgr
image:
  - type: disk
    output: disk.img
    layout: gpt
    size: 2GiB
    sector: 512
    partitions:
      - ptype: bios
        pname: bios
        size: 4MiB
      - type: filesystem
        ptype: efi
        pname: esp
        size: 4MiB
        fsname: ESP
        fstype: fat12
        mount: /boot/efi
        fstab:
          flags: rw,noatime,utf8,errors=remount-ro
      - type: filesystem
        ptype: linux-root-x86-64
        pname: linux
        fsname: ArchLinux
        fstype: ext4
        mount: /
        grow: yes
        fstab:
          boot: yes
          flags: rw,noatime,discard
fstab:
  dev: partlabel
kernel:
  kernel: /boot/vmlinuz-linux
  initramfs: /boot/initramfs-linux.img
  cmdline:
    - add_efi_memmap
grub:
  path: /boot/grub
  targets:
    - x86_64-efi
    - i386-efi
    - i386-pc
bootloader:
  timeout: 3
  method:
    - grub
  items:
    - type: linux
      default: yes
      name: Arch Linux
      path: /
      kernel: ${kernel.kernel}
      initramfs: ${kernel.initramfs}
      cmdline: ${@kernel.cmdline} ro quiet splash
    - type: linux
      name: Arch Linux Fallback
      path: /
      kernel: ${kernel.kernel}
      initramfs: /boot/initramfs-linux-fallback.img
      cmdline: ${@kernel.cmdline} rw loglevel=7
also:
  - device/x86_64

devices/custom/.gitkeep (new empty file)

requirements.txt (new file)
@@ -0,0 +1,3 @@
libarchive-c
pyalpm
pyyaml