Introduce symtab_create_debug_printf and symtab_create_debug_printf_v,
to print the debug messages enabled by "set debug symtab-create".
Change-Id: I442500903f72d4635c2dd9eaef770111f317dc04
On aarch64-linux I run into this failure with gcc 7.5.0:
...
(gdb) print $item.started^M
$1 = (-5312, 65535, 4202476)^M
(gdb) FAIL: gdb.ada/convvar_comp.exp: print $item.started
...
The test-case expects (0, 0, 0), but we're getting another value due to
incorrect location information.
Work around this by:
- first printing the value, and then
- verifying that the convenience variable matches the printed value.
I've verified that the test-case still checks what it should by disabling
the fix from commit cc0e770c0d ("memory error printing component of record
from convenience variable") and observing the test-case fail.
Tested on x86_64-linux and aarch64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29420
On aarch64 (and likewise on arm), I run into:
...
(gdb) PASS: gdb.threads/killed-outside.exp: get pid of inferior
Executing on target: kill -9 11516 (timeout = 300)
builtin_spawn -ignore SIGHUP kill -9 11516^M
continue^M
Continuing.^M
Unable to fetch general registers: No such process.^M
(gdb) [Thread 0xfffff7d511e0 (LWP 11518) exited]^M
^M
Program terminated with signal SIGKILL, Killed.^M
The program no longer exists.^M
FAIL: gdb.threads/killed-outside.exp: prompt after first continue (timeout)
...
due to a mismatch between the actual "No such process" line and the expected
one:
...
set no_such_process_msg "Couldn't get registers: No such process\."
...
Fix this by updating the regexp.
Tested on aarch64-linux, and x86_64-linux.
The Guile code generally checks to see if an htab is non-null before
destroying it. However, the registry code already ensures this, so we
can change these checks to asserts and simplify the code a little.
The registry code creates "registry_data" objects that hold the free
function and the index; then the registry keys refer to this object.
However, only the index is really useful, and now that registries have
a private implementation, just the index can be stored and we can
reduce the memory use of registries a little bit. This also
simplifies the code somewhat.
This rewrites registry.h, removing all the macros and replacing it
with relatively ordinary template classes. The result is less code
than the previous setup. It replaces large macros with a relatively
straightforward C++ class, and now manages its own cleanup.
The existing type-safe "key" class is replaced with the equivalent
template class. This approach ended up requiring relatively few
changes to the users of the registry code in gdb -- code using the key
system just required a small change to the key's declaration.
All existing users of the old C-like API are now converted to use the
type-safe API. This mostly involved changing explicit deletion
functions to be an operator() in a deleter class.
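As an illustration of that conversion, here is a minimal, self-contained
sketch (the names my_data and my_data_deleter are made up for this
example and are not GDB's):

  #include <cstdio>

  struct my_data
  {
    int value;
  };

  /* Old style: an explicit deletion function registered as a callback:
       static void my_data_free (my_data *d) { delete d; }
     New style: a deleter class whose operator() does the freeing.  */
  struct my_data_deleter
  {
    void operator() (my_data *d) const
    {
      delete d;
    }
  };

  int
  main ()
  {
    my_data *d = new my_data {42};
    std::printf ("%d\n", d->value);
    my_data_deleter () (d);   /* Analogous to what the registry does
                                 when the container is destroyed.  */
    return 0;
  }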
The old "save/free" two-phase process is removed, and replaced with a
single "free" phase. No existing code used both phases.
The old "free" callbacks took a parameter for the enclosing container
object. However, this wasn't truly needed and is removed here as
well.
When an objfile is destroyed, types that are still in use and
allocated on that objfile are copied. A temporary hash map is created
during this process, and it is allocated on the destroyed objfile's
obstack -- which normally is fine, as that is going to be destroyed
shortly anyway.
However, this approach requires that the objfile be passed to registry
destruction, and this won't be possible in the rewritten registry.
This patch changes the copied type hash table to simply use the heap
instead. It also removes the 'objfile' parameter from
copy_type_recursive, to make this all more clear.
This patch also fixes an apparent bug in copy_type_recursive.
Previously it was copying the dynamic property list to the dying
objfile's obstack:
- = copy_dynamic_prop_list (&objfile->objfile_obstack,
However I think this is incorrect -- that obstack is about to be
destroyed.
This changes address_space to use new and delete, and makes some other
small C++-ification changes as well, like changing address_space_num
to be a method.
This patch was needed for the subsequent patch to rewrite the registry
system.
PR python/18385
v7:
This version addresses the issues pointed out by Tom.
Added null checks for Python object creations.
Changed from using PyLong_FromLong to the gdb_py versions.
Refactored some code to make it look more cohesive.
Also switched to the safer Python reference count decrement Py_XDECREF;
even though the BreakpointLocation type is never instantiated by the
user directly (as explicitly documented), the safe call makes
decrementing below 0 impossible.
Tom pointed out that using the policy class explicitly to decrement a
reference counted object was not the way to go, so this has instead been
wrapped in a ref_ptr that handles that for us in blocpy_dealloc.
Moved macro from py-internal to py-breakpoint.c.
Renamed the section at the bottom of the commit message to "Patch Description".
v6:
This version addresses the points Pedro gave in review to this patch.
Added the attributes `function`, `fullname` and `thread_groups`,
as requested by Pedro, on the grounds that this more closely resembles
the output of the MI command "-break-list". Added documentation for
these attributes.
Cleaned up leftovers from copy-paste in the test suite and removed
hard-coding of line numbers where possible.
Refactored some code to use more idiomatic C++ range-for loops over
breakpoint locations.
Changed terminology: the naming was very inconsistent, using a mix of
"parent" and "owner". Now "owner" is the only term used, and the field
in gdb_breakpoint_location_object is now also called "owner".
v5:
Changes in response to review by Tom Tromey:
- Replaced manual INCREF/DECREF calls with
gdbpy_ref ptrs in places where possible.
- Fixed formatting that did not conform to GDB style.
- Getting the parent of a bploc now increases the ref count of the parent.
- moved bploc Python definition to py-breakpoint.c
The INCREF of self in bppy_get_locations is due to the individual
locations holding a reference to their owner. This is decremented at
de-alloc time. The reason this is needed is that if the user writes,
for instance:
py loc = gdb.breakpoints()[X].locations[Y]
the breakpoint owner object would immediately go out of scope
(GC'd/dealloced), while the location object requires its owner to stay
alive for as long as the location itself is alive.
Thanks for your review, Tom!
v4:
Fixed remaining doc issues as per request
by Eli.
v3:
Rewritten commit message, shortened + reworded,
added tests.
Patch Description
Currently, the Python API lacks the ability to query breakpoints for
their installed locations, and consequently cannot query any
information about them or enable/disable individual locations.
This patch solves this by adding Python type gdb.BreakpointLocation.
The type is never instantiated by the user of the Python API directly,
but is produced by the gdb.Breakpoint.locations attribute returning
a list of gdb.BreakpointLocation.
gdb.Breakpoint.locations:
The attribute for retrieving the currently installed breakpoint
locations for gdb.Breakpoint. Matches behavior of
the "info breakpoints" command in that it only
returns the last known or currently inserted breakpoint locations.
BreakpointLocation contains 7 attributes
6 read-only attributes:
owner: location owner's Python companion object
source: file path and line number tuple: (string, long) / None
address: installed address of the location
function: function name where location was set
fullname: fullname where location was set
thread_groups: thread groups (inferiors) where location was set.
1 writeable attribute:
enabled: get/set enable/disable this location (bool)
Accessing or calling any of these can throw Python exceptions (as
noted in the online documentation), because breakpoint locations can
be invalidated "behind the scenes", either by being removed from the
original breakpoint or by being changed, for instance when a new
symbol file is loaded, at which point all breakpoint locations are
re-created by GDB.
Therefore this patch chooses to be non-intrusive: it is up to the
Python user to re-request the locations if they become invalid.
There are also event handlers for new object files etc., so if a
Python user is storing breakpoint locations in some larger state they
have built up, refreshing the locations is easy, and it only incurs
runtime overhead when the Python user actually wants to use them.
gdb.BreakpointLocation Python type
struct "gdbpy_breakpoint_location_object" is found in python-internal.h
Its definition, layout, methods and functions
are found in the same file as gdb.Breakpoint (py-breakpoint.c)
One change was also made to breakpoint.h/c, making it possible to
enable and disable a specific bp_location* without knowing its
LOC_NUM, as that number can also change arbitrarily behind the scenes.
Updated docs & news file as per request.
Testsuite: tests the .source attribute and the disabling of
individual locations.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=18385
Change-Id: I302c1c50a557ad59d5d18c88ca19014731d736b0
Fix:
In gdb_mbuild.sh line 174:
continue
^------^ SC2104 (error): In functions, use return instead of continue.
Change-Id: I5ce95b01359c5cfbb1612f2f48b80bfeea66c96c
Commit 05c06f318f enabled GDB to access
memory while threads are running. It did this by accessing
/proc/PID/task/LWP/mem.
Unfortunately, this interface is not implemented for writing in older
kernels (such as RHEL6). This means that GDB is unable to insert
breakpoints on these hosts:
$ ./gdb -q gdb -ex start
Reading symbols from gdb...
Temporary breakpoint 1 at 0x40fdd5: file ../../src/gdb/gdb.c, line 28.
Starting program: /home/rhel6/fsf/linux/gdb/gdb
Warning:
Cannot insert breakpoint 1.
Cannot access memory at address 0x40fdd5
(gdb)
Before this patch, linux_proc_xfer_memory_partial (previously called
linux_proc_xfer_partial) would return TARGET_XFER_EOF if the write to
/proc/PID/mem failed. [More specifically, linux_proc_xfer_partial
would not "bother for one word," but the effect is the essentially
same.]
This status was checked by linux_nat_target::xfer_partial, which would
then fall back to using ptrace to perform the operation.
This is the specific hunk that removed the fallback:
- xfer = linux_proc_xfer_partial (object, annex, readbuf, writebuf,
- offset, len, xfered_len);
- if (xfer != TARGET_XFER_EOF)
- return xfer;
+ return linux_proc_xfer_memory_partial (readbuf, writebuf,
+ offset, len, xfered_len);
+ }
return inf_ptrace_target::xfer_partial (object, annex, readbuf, writebuf,
offset, len, xfered_len);
This patch makes linux_nat_target::xfer_partial go straight to writing
memory via ptrace if writing via /proc/pid/mem is not possible in the
running kernel, enabling GDB to insert breakpoints on these older
kernels. Note that a recent patch changed the return status from
TARGET_XFER_EOF to TARGET_XFER_E_IO.
Tested on {unix,native-gdbserver,native-extended-gdbserver}/-m{32,64}
on x86_64, s390x, aarch64, and ppc64le.
Change-Id: If1d884278e8c4ea71d8836bedd56e6a6c242a415
Probe whether /proc/pid/mem is writable, by using it to write to a GDB
variable. This will be used in the following patch to avoid falling
back to writing to inferior memory with ptrace if /proc/pid/mem _is_
writable.
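For reference, the probing idea can be sketched in a standalone program
like the one below. This is illustrative only; GDB's actual probe
differs in details such as which process's mem file is opened and how
the probe is integrated into the native target.

  #include <cstdint>
  #include <cstdio>
  #include <fcntl.h>
  #include <unistd.h>

  static volatile char probe_target;

  static bool
  proc_mem_writable ()
  {
    int fd = open ("/proc/self/mem", O_RDWR);
    if (fd < 0)
      return false;

    /* Try to flip a variable in our own address space through the
       mem file; on older kernels the write simply fails.  */
    char one = 1;
    ssize_t n = pwrite (fd, &one, 1,
                        (off_t) (uintptr_t) &probe_target);
    close (fd);
    return n == 1 && probe_target == 1;
  }

  int
  main ()
  {
    std::printf ("/proc/<pid>/mem writable: %s\n",
                 proc_mem_writable () ? "yes" : "no");
    return 0;
  }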
Change-Id: If87eff0b46cbe5e32a583e2977a9e17d29d0ed3e
Fix some code style issues suggested by Tom Tromey and Andrew Burgess,
thank you.
(1) Put an introductory comment to explain the purpose of some functions.
(2) Modify the attribute code to make it portable.
(3) Remove globals and pass pointers to locals.
(4) Remove "*" in the subsequent comment lines.
(5) Put two spaces before "{" and "}".
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
When running test-case gdb.opt/inline-small-func.exp with clang 12.0.1, I run
into:
...
gdb compile failed, /usr/bin/ld: inline-small-func0.o: in function `main':
inline-small-func.c:21: undefined reference to `callee'
clang-12.0: error: linker command failed with exit code 1 \
(use -v to see invocation)
UNTESTED: gdb.opt/inline-small-func.exp: failed to prepare
...
Fix this by using __attribute__((always_inline)).
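For reference, a minimal example of the attribute (a hypothetical
stand-in, not the actual inline-small-func.c source):

  #include <cstdio>

  /* always_inline forces the call to be inlined even without
     optimisation, so no out-of-line definition of `callee' is needed
     at link time.  */
  __attribute__ ((always_inline))
  static inline int
  callee (int x)
  {
    return x + 1;
  }

  int
  main ()
  {
    std::printf ("%d\n", callee (41));
    return 0;
  }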
Tested on x86_64-linux.
I tried building GDB on GNU/Hurd, and ran into this error:
CXX gnu-nat.o
gnu-nat.c: In member function ‘virtual int gnu_nat_target::find_memory_regions(find_memory_region_ftype, void*)’:
gnu-nat.c:2620:21: error: too few arguments to function
2620 | (*func) (last_region_address,
| ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
2621 | last_region_end - last_region_address,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2622 | last_protection & VM_PROT_READ,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2623 | last_protection & VM_PROT_WRITE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2624 | last_protection & VM_PROT_EXECUTE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2625 | 1, /* MODIFIED is unknown, pass it as true. */
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2626 | data);
| ~~~~~
gnu-nat.c:2635:13: error: too few arguments to function
2635 | (*func) (last_region_address, last_region_end - last_region_address,
| ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2636 | last_protection & VM_PROT_READ,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2637 | last_protection & VM_PROT_WRITE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2638 | last_protection & VM_PROT_EXECUTE,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2639 | 1, /* MODIFIED is unknown, pass it as true. */
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2640 | data);
| ~~~~~
make[2]: *** [Makefile:1926: gnu-nat.o] Error 1
This is because in this commit:
commit 68cffbbd44
Date: Thu Mar 31 11:42:35 2022 +0100
[AArch64] MTE corefile support
Added a new argument to find_memory_region_ftype, but did not pass it to
the function in gnu-nat.c. Fix this by passing memory_tagged as false.
As Luis pointed out, similar bugs may also appear on FreeBSD and NetBSD,
and I have reproduced them on both systems. This patch fixes them
incidentally.
Tested by rebuilding on GNU/Hurd, FreeBSD/amd64 and NetBSD/amd64.
I ran into this error when building GDB on NetBSD:
CXX netbsd-nat.o
netbsd-nat.c: In member function 'virtual bool nbsd_nat_target::info_proc(const char*, info_proc_what)':
netbsd-nat.c:314:3: error: 'gdb_argv' was not declared in this scope
gdb_argv built_argv (args);
^~~~~~~~
netbsd-nat.c:314:3: note: suggested alternative: 'gdbarch'
gdb_argv built_argv (args);
^~~~~~~~
gdbarch
netbsd-nat.c:315:7: error: 'built_argv' was not declared in this scope
if (built_argv.count () == 0)
^~~~~~~~~~
netbsd-nat.c:315:7: note: suggested alternative: 'buildargv'
if (built_argv.count () == 0)
^~~~~~~~~~
buildargv
gmake[2]: *** [Makefile:1893: netbsd-nat.o] Error 1
Fix this by adding the obviously missing header file.
Tested by rebuilding on NetBSD/amd64.
After the commit:
commit 08106042d9
Date: Thu May 19 13:20:17 2022 +0100
gdb: move the type cast into gdbarch_tdep
GDB would no longer build using g++ 4.8. The issue appears to be some
confusion caused by GDB having 'struct gdbarch_tdep', but also a
templated function called 'gdbarch_tdep'. Prior to the above commit
the gdbarch_tdep function was not templated, and this compiled just
fine. Note that the above commit compiles just fine with later
versions of g++, so this issue was clearly fixed at some point, though
I've not tried to track down exactly when.
In this commit I propose to fix the g++ 4.8 build problem by renaming
'struct gdbarch_tdep' to 'struct gdbarch_tdep_base'. This rename
better represents that the struct is only ever used as a base class,
and removes the overloading of the name, which allows GDB to build
with g++ 4.8.
I've also updated the comment on 'struct gdbarch_tdep_base' to fix a
typo, and the comment on the 'gdbarch_tdep' function, to mention that
in maintainer mode a run-time type check is performed.
The varobj_invalidate function is meant to be called when restarting a
process; at this point it checks whether some of the previously
existing varobjs can be recreated in the context of the new process.
Two kinds of varobj are subject to re-creation: global varobjs (i.e.
varobjs which reference a global variable), and floating varobjs (i.e.
varobjs which are always re-evaluated in the context of whatever is
the currently selected frame at the time of evaluation).
However, in the re-creation process, varobj_invalidate_iter recreates
floating varobjs as non-floating, due to an invalid parameter.
This patch fixes this and adds an assertion to check that if a varobj
is indeed recreated, it matches the original varobj's "floating"
property.
Another issue is that if, at this re-creation time, the expression
watched by the floating varobj is not in scope, then the varobj is
marked as invalid. If the user later selects a frame where the
expression becomes valid, the varobj remains invalid, which is wrong.
This patch also makes sure that floating varobjs are not invalidated
if they cannot be evaluated.
The last important thing to note is that, due to the previous patch,
when varobj_invalidate is executed (in the context of a new process),
any global varobj has already been invalidated (this was done when the
objfile it referred to got invalidated). As a consequence,
varobj_invalidate tries to recreate varobjs which are already marked
as invalid. This does not feel entirely right, but I keep this
behavior for backward compatibility.
Tested on x86_64-linux
A varobj object contains references to types, variables (i.e. struct
variable) and an expression. All of those can reference data on an
objfile's obstack. It is possible for this objfile to be deleted (and
the obstack to be freed) while the varobj remains valid. Later, if
the user uses the varobj, this results in a use-after-free error.
With an address sanitizer build, this leads to a plain error. For a
non address sanitizer build we might see undefined behaviour, which
manifests itself as assertion failures when accessing data backed by
freed memory.
This can be observed if we create a varobj that refers to a symbol in
a shared library, and then either the objfile gets reloaded (using the
`file` command) or the shared library is unloaded (with a call to
dlclose for example).
This patch fixes those issues by:
- Adding a cleanup procedure to the free_objfile observable. When
activated, this observer clears expressions referencing the objfile
being freed, and removes references to blocks belonging to this
objfile.
- Adding varobj support to `preserve_values` (value.c). This ensures
that before the objfile is unloaded, any type owned by the objfile and
referenced by the varobj is replaced by an equivalent type not owned
by the objfile. This process is done here instead of in the
free_objfile observer in order to reuse the type hash table already
used for a similar purpose when replacing types of values kept in the
value history.
This patch also makes sure to keep a reference to the expression's
gdbarch and language_defn members when varobj->root->exp is
initialized. Those structures outlive the objfile, so this is safe.
This is done because those references might be used to initialize a
Python context even after exp is invalidated. Another approach could
have been to initialize the Python context with a default gdbarch and
language_defn (i.e. nullptr) if exp is NULL, but since we might still
try to display the value which was obtained by evaluating exp while it
was still valid, keeping track of the context which was used at that
time seems reasonable.
Tested on x86_64-Linux.
Co-Authored-By: Pedro Alves <pedro@palves.net>
With the CLI testsuite's runto proc, we can pass "allow-pending" as an
option, like:
runto func allow-pending
That is currently not possible with MI's mi_runto, however. This
patch makes it possible, by adding a new "-pending" option to
mi_runto.
A pending breakpoint shows different MI attributes compared to a
breakpoint with a location, so the regexp returned by
mi_make_breakpoint isn't suitable. Thus, add a new
mi_make_breakpoint_pending proc for pending breakpoints.
Tweak mi_runto to let it take and pass down arguments.
Change-Id: I185fef00ab545a1df2ce12b4dbc3da908783a37c
GDB uses the environment variable PYTHONDONTWRITEBYTECODE to
determine whether or not to write the result of byte-compiling
python modules when the "python dont-write-bytecode" setting
is "auto". Simon noticed that GDB's implementation doesn't
follow the Python documentation.
At present, GDB only checks for the existence of this environment
variable. That is not sufficient though. Regarding
PYTHONDONTWRITEBYTECODE, this document...
https://docs.python.org/3/using/cmdline.html
...says:
If this is set to a non-empty string, Python won't try to write
.pyc files on the import of source modules.
This commit fixes GDB's handling of PYTHONDONTWRITEBYTECODE by adding
an empty string check.
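The check amounts to something like the following sketch (illustrative
only, not GDB's actual code):

  #include <cstdlib>

  /* Per the Python documentation, only a non-empty value of
     PYTHONDONTWRITEBYTECODE suppresses .pyc files; an empty string
     must behave as if the variable were unset.  */
  static bool
  env_dont_write_bytecode ()
  {
    const char *env = std::getenv ("PYTHONDONTWRITEBYTECODE");
    return env != nullptr && *env != '\0';
  }

  int
  main ()
  {
    return env_dont_write_bytecode () ? 0 : 1;
  }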
This commit also corrects the set/show command documentation for
"python dont-write-bytecode". The current doc was just a copy
of that for set/show python ignore-environment.
During his review of an earlier version of this patch, Eli Zaretskii
asked that the help text that I proposed for "set/show python
dont-write-bytecode" be expanded. I've done that in addition to
clarifying the documentation of this option in the GDB manual.
After this commit:
commit 81384924cd
Date: Tue Apr 5 11:06:16 2022 +0100
gdb: have gdb_disassemble_info carry 'this' in its stream pointer
The disassemble_info::stream field will no longer be a ui_file*. That
commit failed to update one location in py-disasm.c though.
While running some tests using the Python disassembler API, I
triggered a call to gdbpy_disassembler::print_address_func, and, as I
had compiled GDB with the undefined behaviour sanitizer, GDB crashed
as the code currently (incorrectly) casts the stream field to be a
ui_file*.
In this commit I fix this error.
In order to test this case I had to tweak the existing test case a
little. I also spotted some debug printf statements in py-disasm.py,
which I have removed.
Simon pointed out that gdb_printing_disassembler::m_in_comment can be
used uninitialised by the Python disassembler API code. This issue
was spotted when GDB was built with the undefined behaviour sanitizer,
and causes the gdb.python/py-disasm.exp test to fail like this:
(gdb) PASS: gdb.python/py-disasm.exp: global_disassembler=GlobalPreInfoDisassembler: python add_global_disassembler(GlobalPreInfoDisassembler)
disassemble main
Dump of assembler code for function main:
0x0000555555555119 <+0>: push %rbp
0x000055555555511a <+1>: mov %rsp,%rbp
0x000055555555511d <+4>: nop
/home/user/src/binutils-gdb/gdb/disasm.h:144:12: runtime error: load of value 118, which is not a valid value for type 'bool'
The problem is that in disasmpy_builtin_disassemble we create a new
instance of gdbpy_disassembler, which is a sub-class of
gdb_printing_disassembler; however, the m_in_comment field is never
initialised.
This commit fixes the issue by providing a default initialisation
value for m_in_comment in disasm.h. As we only ever disassemble a
single instruction in disasmpy_builtin_disassemble, we don't need to
worry about resetting m_in_comment back to false after the single
instruction has been disassembled.
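In other words, the flag now has an in-class default along these lines
(an illustrative sketch, not the actual disasm.h contents):

  struct printing_disassembler_sketch
  {
    /* Default member initialiser: every constructor leaves the flag
       with a well-defined value instead of indeterminate memory.  */
    bool m_in_comment = false;
  };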
With this commit the above issue is resolved and
gdb.python/py-disasm.exp now passes.
For PR gdb/29373, I wrote an alternative implementation of struct
packed that uses a gdb_byte array for internal representation, needed
for mingw+clang. While adding that, I wrote some unit tests to make
sure both implementations behave the same. While at it, I implemented
all relational operators. This commit adds said unit tests and
relational operators. The alternative gdb_byte array implementation
will come next.
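For the curious, the usual pattern of deriving the full operator set
from operator== and operator< looks like this (a generic sketch, not
the actual struct packed code):

  #include <cassert>

  template<typename T>
  struct wrapper
  {
    T value;

    bool operator== (const wrapper &o) const { return value == o.value; }
    bool operator!= (const wrapper &o) const { return !(*this == o); }
    bool operator<  (const wrapper &o) const { return value < o.value; }
    bool operator>  (const wrapper &o) const { return o < *this; }
    bool operator<= (const wrapper &o) const { return !(o < *this); }
    bool operator>= (const wrapper &o) const { return !(*this < o); }
  };

  int
  main ()
  {
    wrapper<int> a {1}, b {2};
    assert (a < b && a <= b && b > a && b >= a && a != b && !(a == b));
    return 0;
  }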
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29373
Change-Id: I023315ee03622c59c397bf4affc0b68179c32374
For Arm Cortex-M33 with security extensions, there are 4 different
stack pointers (msp_s, msp_ns, psp_s, psp_ns), without security
extensions and for other Cortex-M targets, there are 2 different
stack pointers (msp and psp).
With this patch, sp will always be in sync with one of the real stack
pointers on Arm targets that contain more than one stack pointer.
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
As the register numbers for the alternative Arm SP registers are not
constant, it's not possible to use a switch statement to define the
rules. In order not to have a mix, replace the few existing switch
statements with regular if-else if statements.
windows_nat_target::detach has a variable 'detached' that is only set
after a call to 'error'. However, this can't happen because 'error'
throws an exception.
This patch removes the dead code.
In commit:
commit 4f46c0bc36
Date: Mon Jul 4 17:45:25 2022 +0100
opcodes: add new sub-mnemonic disassembler style
I added a new disassembler style dis_style_sub_mnemonic, but forgot to
add GDB support for this style. Fix this oversight in this commit.
This patch adds a test case to try to clear an internal python
breakpoint using the clear command.
This was suggested by Pedro during a code review of the following
commit.
commit a5c69b1e49
Date: Sun Apr 17 15:09:46 2022 +0800
gdb: fix using clear command to delete non-user breakpoints(PR cli/7161)
Tested on x86_64 openSUSE Tumbleweed.
The get_maint_bp_addr procedure will be shared by other test cases, so
move it to gdb-utils.exp.
Following Andrew's suggestion, I renamed get_maint_bp_addr to
gdb_get_bp_addr, since it is meant to handle normal breakpoints in
addition to the internal ones. Note that there is still room for
improvement in this procedure, which I indicated in comments nearby.
When running test-case gdb.cp/cpexprs-debug-types.exp with target board
cc-with-debug-names on a system with gcc 12.1.1 (defaulting to dwarf 5), I
run into:
...
(gdb) file cpexprs-debug-types^M
Reading symbols from cpexprs-debug-types...^M
warning: Section .debug_aranges in cpexprs-debug-types has duplicate \
debug_info_offset 0x0, ignoring .debug_aranges.^M
gdb/dwarf2/read.h:309: internal-error: set_length: \
Assertion `m_length == length' failed.^M
...
The exec contains a .debug_names section, which gdb rejects due to
.debug_names containing a list of TUs, while the exec doesn't contain a
.debug_types section (which is what you'd expect for dwarf 4).
GDB then falls back to the cooked index, which calls create_all_comp_units
to create all_comp_units. However, the failed index reading left some
elements in all_comp_units, so we end up with duplicates in all_comp_units,
which causes the misleading complaint and the assert.
Fix this by:
- asserting at the start of create_all_comp_units that all_comp_units is empty,
as we do in create_cus_from_index and create_cus_from_debug_names, and
- cleaning up all_comp_units when failing in dwarf2_read_debug_names.
Add a similar cleanup in dwarf2_read_gdb_index.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29381
Some Ada tests repeat their test sequence with different gnat-encodings,
typically "all" and "minimal". However, they give the same name to both
binaries, meaning the second run overwrites the binary of the first run.
This makes it difficult and confusing when trying to reproduce problems
manually with the test artifacts. Change those tests to use unique
names for each pass.
Change-Id: Iaa3c9f041241249a7d67392e785c31aa189dcc88
There are two changes here:
1. When debugging the CSKY architecture, after executing "info register",
we want to print out the GPRs, the PC and the registers related to
exceptions.
2. With tdesc-xml, users can view the register groups described in the
XML.
This commit makes use of gdb::checked_static_cast when casting the
generic gdbarch_tdep pointer to a specific sub-class type. This means
that, when compiled in developer mode, GDB will validate that the cast
is correct.
In order to use gdb::checked_static_cast the types involved must have
RTTI, which is why the gdbarch_tdep base class now has a virtual
destructor.
Assuming there are no bugs in GDB where we cast a gdbarch_tdep pointer
to the wrong type, then there should be no changes after this commit.
If any bugs do exist, then GDB will now assert (in a developer build).
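The underlying mechanism is roughly the following (a self-contained
sketch with made-up type names; not the actual gdb::checked_static_cast
implementation):

  #include <cassert>

  struct tdep_base
  {
    virtual ~tdep_base () = default;  /* Virtual dtor enables RTTI.  */
  };

  struct riscv_tdep : tdep_base { int xlen = 64; };
  struct i386_tdep : tdep_base { int st0_regnum = 16; };

  template<typename T>
  T *
  checked_cast (tdep_base *base)
  {
  #ifdef DEVELOPMENT
    /* Developer build: verify the static_cast really is valid.  */
    T *result = dynamic_cast<T *> (base);
    assert (result != nullptr);
    return result;
  #else
    return static_cast<T *> (base);
  #endif
  }

  int
  main ()
  {
    riscv_tdep r;
    tdep_base *base = &r;
    checked_cast<riscv_tdep> (base);  /* Fine.  */
    /* checked_cast<i386_tdep> (base) would assert in a DEVELOPMENT
       build, instead of silently invoking undefined behaviour.  */
    return 0;
  }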
I built GDB for all targets on an x86-64/GNU-Linux system, then
(accidentally) passed GDB a RISC-V binary, and asked GDB to "run" the
binary on the native target. I got this error:
(gdb) show architecture
The target architecture is set to "auto" (currently "i386").
(gdb) file /tmp/hello.rv32.exe
Reading symbols from /tmp/hello.rv32.exe...
(gdb) show architecture
The target architecture is set to "auto" (currently "riscv:rv32").
(gdb) run
Starting program: /tmp/hello.rv32.exe
../../src/gdb/i387-tdep.c:596: internal-error: i387_supply_fxsave: Assertion `tdep->st0_regnum >= I386_ST0_REGNUM' failed.
What's going on here is this: initially the architecture is i386;
this is based on the default architecture, which is set based on the
native target. After loading the RISC-V executable, the architecture
of the
current inferior is updated based on the architecture of the
executable.
When we "run", GDB does a fork & exec, with the inferior being
controlled through ptrace. GDB sees an initial stop from the inferior
as soon as the inferior comes to life. In response to this stop GDB
ends up calling save_stop_reason (linux-nat.c), which ends up trying
to read registers from the inferior; to do this we end up calling
target_ops::fetch_registers, which, for the x86-64 native target,
calls amd64_linux_nat_target::fetch_registers.
After this I eventually end up in i387_supply_fxsave. Different x86
based targets will end up in different functions to fetch registers,
but it doesn't really matter which function we end up in; the problem
is this line, which is repeated in many places:
i386_gdbarch_tdep *tdep = (i386_gdbarch_tdep *) gdbarch_tdep (arch);
The problem here is that the ARCH in this line comes from the current
inferior, which, as we discussed above, will be a RISC-V gdbarch, so
the tdep field will actually be of type riscv_gdbarch_tdep, not
i386_gdbarch_tdep. After this cast we are relying on undefined
behaviour; in my case I happen to trigger an assert, but this might
not always be the case.
The thing I tried that exposed this problem was of course, trying to
start an executable of the wrong architecture on a native target. I
don't think that the correct solution for this problem is to detect,
at the point of cast, that the gdbarch_tdep object is of the wrong
type, but, I did wonder, is there a way that we could protect
ourselves from incorrectly casting the gdbarch_tdep object?
I think that there is something we can do here, and this commit is the
first step in that direction, though no actual check is added by this
commit.
This commit can be split into two parts:
(1) In gdbarch.h and arch-utils.c. In these files I have modified
gdbarch_tdep (the function) so that it now takes a template argument,
like this:
template<typename TDepType>
static inline TDepType *
gdbarch_tdep (struct gdbarch *gdbarch)
{
  struct gdbarch_tdep *tdep = gdbarch_tdep_1 (gdbarch);
  return static_cast<TDepType *> (tdep);
}
After this change we are no better protected, but the cast is now
done within the gdbarch_tdep function rather than at the call sites.
This leads to the second, much larger change in this commit:
(2) Everywhere gdbarch_tdep is called, we make changes like this:
- i386_gdbarch_tdep *tdep = (i386_gdbarch_tdep *) gdbarch_tdep (arch);
+ i386_gdbarch_tdep *tdep = gdbarch_tdep<i386_gdbarch_tdep> (arch);
There should be no functional change after this commit.
In the next commit I will build on this change to add an assertion in
gdbarch_tdep that checks we are casting to the correct type.
The three targets that implement gdbarch_adjust_breakpoint_address are
arm, frv, and mips. In each of these targets the adjust breakpoint
address function does some combination of reading the symbol table
and reading memory at the location where the breakpoint could be
placed.
The problem is that performing these actions requires that the current
inferior and program space be the one in which the breakpoint will be
placed, and this is not currently always the case.
Consider a GDB session with multiple inferiors. One inferior might be
a native target while another could be a remote target of a completely
different architecture. Alternatively, if we consider ARM and
AArch64, one native inferior might be AArch64, while a second native
inferior could be ARM.
In these cases it is possible, and valid, for a user to have one
inferior selected, and place a breakpoint in the other inferior by
placing a breakpoint on a particular symbol.
If this happens, then currently, when
gdbarch_adjust_breakpoint_address is called, the wrong inferior (and
program space) will be selected, and memory reads and symbol look-ups
will not return the expected results; this could lead to breakpoints
being placed in the wrong location.
There are currently two places where gdbarch_adjust_breakpoint_address
is called:
1. In infrun.c, in the function handle_step_into_function. In this
case, I believe that the correct inferior and program space will
already be selected as this is called as part of the stop event
handling, so I don't think we need to worry about this case, and
2. In breakpoint.c, in the function adjust_breakpoint_address, which
is itself called from code_breakpoint::add_location and
watch_command_1.
I don't think we need to worry about the watch_command_1 case: this
is for when a local watch expression is created, which can only be in
the currently selected inferior, so this case should be fine.
The code_breakpoint::add_location case is the one that needs fixing,
this is what allows a breakpoint to be created between inferiors.
To fix the code_breakpoint::add_location case, I propose that we pass
the "correct" program_space (i.e. the program space in which the
breakpoint will be created) to the adjust_breakpoint_address function.
Then in adjust_breakpoint_address we can make use of
switch_to_program_space_and_thread to switch program_space and
inferior before calling gdbarch_adjust_breakpoint_address.
I discovered this issue while working on a later patch in this
series. This later patch will detect when we cast the result of
gdbarch_tdep to the wrong type.
With this later patch in place I ran gdb.multi/multi-arch.exp on an
AArch64 target. In this situation, two inferiors are created, an
AArch64 inferior, and an ARM inferior. The test selected the AArch64
inferior and tries to create a breakpoint in the ARM inferior.
As a result of this we end up in arm_adjust_breakpoint_address, which
calls arm_pc_is_thumb. Before this commit the AArch64 inferior would
be current. As a result, all of the checks in arm_pc_is_thumb would
fail (they rely on reading symbols from the current program space),
and so, at the end of arm_pc_is_thumb we would call
arm_frame_is_thumb. However, remember, at this point the current
inferior is the AArch64 inferior, so the current frame is an AArch64
frame.
In arm_frame_is_thumb we call arm_psr_thumb_bit, which calls
gdbarch_tdep and casts the result to arm_gdbarch_tdep. This is wrong,
the tdep field is of type aarch64_gdbarch_tdep. After this we have
undefined behaviour.
With this patch in place, we will have switched to a thread in the ARM
program space before calling arm_adjust_breakpoint_address. As a
result, we now succeed in looking up the required symbols in
arm_pc_is_thumb, and so we never call arm_frame_is_thumb.
However, in the worst case scenario, if we did end up calling
arm_frame_is_thumb, as the current inferior should now be the ARM
inferior, the current frame should be an ARM frame, so we still should
not hit undefined behaviour.
I have added an assert to arm_frame_is_thumb.
This commit is similar to the previous commit, but in this case GDB is
actually relying on undefined behaviour.
Consider building GDB for all targets on x86-64/GNU-Linux, then doing
this:
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is auto.
The 32 bit address mask is set automatically. Currently disabled
(gdb)
The 'show mips mask-address' command ends up in show_mask_address in
mips-tdep.c, and this function does this:
mips_gdbarch_tdep *tdep
= (mips_gdbarch_tdep *) gdbarch_tdep (target_gdbarch ());
Later we might pass TDEP to mips_mask_address_p. However, in my
example above, on an x86-64 native target, the current target
architecture will be an x86-64 gdbarch, and the tdep field within the
gdbarch will be of type i386_gdbarch_tdep, not of type
mips_gdbarch_tdep, as a result the cast above was incorrect, and TDEP
is not pointing at what it thinks it is.
I also think the current output is a little confusing: we appear to
have two lines that show the same information, but using different
words.
The first line comes from calling deprecated_show_value_hack, while
the second line is printed directly from show_mask_address. However,
both of these lines are printing the same mask_address_var value. I
don't think the two lines actually add any value here.
Finally, none of the text in this function is passed through the
internationalisation mechanism.
It would be nice to remove another use of deprecated_show_value_hack
if possible, so this commit does a complete rewrite of
show_mask_address.
After this commit the output of the above example command, still on my
x86-64 native target is:
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is "auto" (current architecture is not MIPS).
The 'current architecture is not MIPS' text is only displayed when the
current architecture is not MIPS. If the architecture is mips then we
get the more commonly seen 'currently "on"' or 'currently "off"', like
this:
(gdb) set architecture mips
The target architecture is set to "mips".
(gdb) show mips mask-address
Zeroing of upper 32 bits of 64-bit addresses is "auto" (currently "off").
(gdb)
All the text is passed through the internationalisation mechanism, and
we only call gdbarch_tdep when we know the gdbarch architecture is
bfd_arch_mips.
This is a small refactor to resolve an issue before it becomes a
problem in a later commit.
Move the fetching of an arm_gdbarch_tdep into a more inner scope
within two functions in arm-tdep.c.
The problem with the current code is that the functions in question
are used as the callbacks for two set/show parameters. These set/show
parameters are available no matter the current architecture, but are
really about controlling an ARM architecture specific setting. And
so, if I build GDB for all targets on an x86-64/GNU-Linux system, I
can still do this:
(gdb) show arm fpu
(gdb) show arm abi
After these calls we end up in show_fp_model and arm_show_abi
respectively, where we unconditionally do this:
arm_gdbarch_tdep *tdep
= (arm_gdbarch_tdep *) gdbarch_tdep (target_gdbarch ());
However, the gdbarch_tdep() result will only be a arm_gdbarch_tdep if
the current architecture is ARM, otherwise the result will actually be
of some other type.
This isn't actually a problem, as in both cases the use of tdep is
guarded by a later check that the gdbarch architecture is
bfd_arch_arm.
This commit just moves the call to gdbarch_tdep() after the
architecture check.
In a later commit gdbarch_tdep() will be able to spot when we are
casting the result to the wrong type, and this function will trigger
assertion failures if things are not fixed.
There should be no user visible changes after this commit.
All uses of this helper really just check whether the register is
one of the alternative SP registers (MSP/MSP_S/MSP_NS/PSP/PSP_S/PSP_NS),
with the ARM_SP_REGNUM case being handled separately.
Signed-off-by: Luis Machado <luis.machado@arm.com>
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
Signed-off-by: Yvan Roux <yvan.roux@foss.st.com>
With python 3.11 I noticed:
...
$ gdb -q -batch -ex "maint selftest python"
Running selftest python.
Self test failed: self-test failed at gdb/python/python.c:2246
Ran 1 unit tests, 1 failed
...
In more detail:
...
(gdb) p output
$5 = "Traceback (most recent call last):\n File \"<string>\", line 0, \
in <module>\nKeyboardInterrupt\n"
(gdb) p ref_output
$6 = "Traceback (most recent call last):\n File \"<string>\", line 1, \
in <module>\nKeyboardInterrupt\n"
...
Fix this by also allowing line number 0.
Tested on x86_64-linux.
This should hopefully fix buildbot builder gdb-rawhide-x86_64.
I noticed this code in dw2_debug_names_iterator::next:
...
case DW_IDX_compile_unit:
  /* Don't crash on bad data. */
  if (ull >= per_bfd->all_comp_units.size ())
    {
      complaint (_(".debug_names entry has bad CU index %s"
                   " [in module %s]"),
                 pulongest (ull),
                 objfile_name (objfile));
      continue;
    }
  per_cu = per_bfd->get_cu (ull);
  break;
...
This code used to DTRT, before we started keeping both CUs and TUs in
all_comp_units.
Fix by using "per_bfd->all_comp_units.size () - per_bfd->tu_stats.nr_tus"
instead.
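To make the off-by-nr_tus problem concrete, here is a small standalone
sketch of the corrected bound (illustrative names, not the actual GDB
data structures):

  #include <cassert>
  #include <cstddef>
  #include <vector>

  struct unit { bool is_tu; };

  /* When CUs and TUs share one vector (CUs first, then TUs), a CU
     index is only valid below size () - nr_tus; checking against
     size () alone would wrongly accept an index naming a TU.  */
  static bool
  cu_index_ok (const std::vector<unit> &all_units, size_t nr_tus,
               size_t idx)
  {
    return idx < all_units.size () - nr_tus;
  }

  int
  main ()
  {
    std::vector<unit> units (7);         /* 6 CUs followed by 1 TU.  */
    units.back ().is_tu = true;
    assert (cu_index_ok (units, 1, 5));  /* Last CU: valid.  */
    assert (!cu_index_ok (units, 1, 6)); /* Index 6 names the TU.  */
    return 0;
  }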
It's hard to produce a test-case for this, but let's try at least to trigger
the complaint somehow. We start out by creating an exec with .debug_types and
.debug_names:
...
$ gcc -g ~/hello.c -fdebug-types-section
$ gdb-add-index -dwarf-5 a.out
...
and verify that we don't see any complaints:
...
$ gdb -q -batch -iex "set complaints 100" ./a.out
...
We look at the CU and TU table using readelf -w and conclude that we have
nr_cus == 6 and nr_tus == 1.
Now override ull in dw2_debug_names_iterator::next for the DW_IDX_compile_unit
case to 6, and we have:
...
$ gdb -q -batch -iex "set complaints 100" ./a.out
During symbol reading: .debug_names entry has bad CU index 6 [in module a.out]
...
After this, it still crashes because this code in
dw2_debug_names_iterator::next:
...
/* Skip if already read in. */
if (m_per_objfile->symtab_set_p (per_cu))
goto again;
...
is called with per_cu == nullptr.
Fix this by skipping the entry if per_cu == nullptr.
Now revert the fix and observe that the complaint disappears, so we've
confirmed that the fix is required.
A somewhat similar issue for .gdb_index in dw2_symtab_iter_next has been filed
as PR29367.
Tested on x86_64-linux, with native and target board cc-with-debug-names.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29336
Python 3.11 deprecates PySys_SetPath and Py_SetProgramName. The
PyConfig API replaces these and other functions. This commit uses the
PyConfig API to provide equivalent functionality while also preserving
support for older versions of Python, i.e. those before Python 3.8.
A beta version of Python 3.11 is available in Fedora Rawhide. Both
Fedora 35 and Fedora 36 use Python 3.10, while Fedora 34 still used
Python 3.9. I've tested these changes on Fedora 34, Fedora 36, and
rawhide, though complete testing was not possible on rawhide due to
a kernel bug. That being the case, I decided to enable the newer
PyConfig API by testing PY_VERSION_HEX against 0x030a0000. This
corresponds to Python 3.10.
We could try to use the PyConfig API for Python versions as early as 3.8,
but I'm reluctant to do this as there may have been PyConfig related
bugs in earlier versions which have since been fixed. Recent linux
distributions should have support for Python 3.10. This should be
more than adequate for testing the new Python initialization code in
GDB.
Information about the PyConfig API as well as the motivation behind
deprecating the old interface can be found at these links:
https://github.com/python/cpython/issues/88279
https://peps.python.org/pep-0587/
https://docs.python.org/3.11/c-api/init_config.html
The v2 commit also addresses several problems that Simon found in
the v1 version.
In v1, I had used Py_DontWriteBytecodeFlag in the new initialization
code, but Simon pointed out that this global configuration variable
will be deprecated in Python 3.12. This version of the patch no longer
uses Py_DontWriteBytecodeFlag in the new initialization code.
Additionally, both Py_DontWriteBytecodeFlag and Py_IgnoreEnvironmentFlag
will no longer be used when building GDB against Python 3.10 or higher.
While it's true that both of these global configuration variables are
deprecated in Python 3.12, it makes sense to disable their use for
gdb builds against 3.10 and higher since those are the versions for
which the PyConfig API is now being used by GDB. (The PyConfig API
includes different mechanisms for making the same settings afforded
by use of the soon-to-be deprecated global configuration variables.)
Simon also noted that PyConfig_Clear() would not have been called for
one of the failure paths. I've fixed that problem and also made the
rest of the "bail out" code more direct. In particular,
PyConfig_Clear() will always be called, both for success and failure.
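For reference, the overall shape of PyConfig-based initialisation looks
roughly like this (a simplified sketch for Python 3.8+, not GDB's actual
python.c code):

  #include <Python.h>

  static bool
  init_python (const char *program_name)
  {
    PyConfig config;
    PyConfig_InitPythonConfig (&config);

    /* Settings formerly made via global configuration variables such
       as Py_DontWriteBytecodeFlag now live in PyConfig.  */
    config.write_bytecode = 0;

    PyStatus status = PyConfig_SetBytesString (&config,
                                               &config.program_name,
                                               program_name);
    if (!PyStatus_Exception (status))
      status = Py_InitializeFromConfig (&config);

    /* PyConfig_Clear is reached on both the success and failure
       paths.  */
    PyConfig_Clear (&config);
    return !PyStatus_Exception (status);
  }

  int
  main ()
  {
    if (!init_python ("example"))
      return 1;
    PyRun_SimpleString ("print ('hello from embedded python')");
    Py_Finalize ();
    return 0;
  }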
The v3 patch addresses some rebase conflicts related to module
initialization. Commit 3acd9a692d ("Make 'import gdb.events' work")
uses PyImport_ExtendInittab instead of PyImport_AppendInittab. That
commit also initializes a struct for each module to import. Both the
initialization and the call to PyImport_ExtendInittab were moved ahead
of the ifdefs to avoid
having to replicate (at least some of) the code three times in various
portions of the ifdefs.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28668
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29287
GDB currently fails to build with libc++, because libc++ is stricter
about which headers "leak" entities they're not guaranteed to support.
The following headers have been added (a short illustration follows
the list):
* `<iterator>`, to support `std::back_inserter`
* `<utility>`, to support `std::move` and `std::swap`
* `<vector>`, to support `std::vector`
Change-Id: Iaeb15057c5fbb43217df77ce34d4e54446dbcf3d