96d6e190e9
There are some known limitations for now:
* Do not shrink the length of the uleb128 value, even if the value is reduced
after relaxations; also report an error if the length grows (a sketch of this
fixed-length encoding follows the list).
* R_RISCV_SET_ULEB128 needs to be paired with, and placed before,
R_RISCV_SUB_ULEB128.
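Keeping the encoded length fixed amounts to re-encoding the (possibly smaller)
value into the original number of bytes, padding with continuation bytes, and
failing when the value no longer fits.  The helper below is only an
illustrative sketch of that rule, not the actual bfd code, which goes through
_bfd_read_unsigned_leb128 and _bfd_write_unsigned_leb128:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only: write VALUE as a uleb128 into exactly LEN
   bytes at BUF, padding with continuation bytes when the relaxed value
   became smaller.  Return false if VALUE needs more than LEN bytes,
   mirroring the "error if the length grows" rule.  */
static bool
write_uleb128_fixed_length (uint8_t *buf, size_t len, uint64_t value)
{
  size_t i;
  for (i = 0; i < len; i++)
    {
      uint8_t byte = value & 0x7f;
      value >>= 7;
      if (i + 1 < len)
        byte |= 0x80;   /* Keep the original length; pad if needed.  */
      buf[i] = byte;
    }
  return value == 0;    /* Anything left over would grow the field.  */
}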
bfd/
* bfd-in2.h: Regenerated.
* elfnn-riscv.c (perform_relocation): Perform R_RISCV_SUB_ULEB128 and
R_RISCV_SET_ULEB128 relocations. Do not shrink the length of the
uleb128 value, and report an error if the length grows. Called the
generic functions _bfd_read_unsigned_leb128 and _bfd_write_unsigned_leb128
to encode the uleb128 into the section contents.
(riscv_elf_relocate_section): Make sure that R_RISCV_SET_ULEB128 is
paired with and placed before R_RISCV_SUB_ULEB128.
* elfxx-riscv.c (howto_table): Added R_RISCV_SUB_ULEB128 and
R_RISCV_SET_ULEB128.
(riscv_reloc_map): Likewise.
(riscv_elf_ignore_reloc): New function.
* libbfd.h: Regenerated.
* reloc.c (BFD_RELOC_RISCV_SET_ULEB128, BFD_RELOC_RISCV_SUB_ULEB128):
New relocations to support .uleb128 subtraction.
gas/
* config/tc-riscv.c (md_apply_fix): Added BFD_RELOC_RISCV_SET_ULEB128
and BFD_RELOC_RISCV_SUB_ULEB128.
(s_riscv_leb128): Updated to allow uleb128 subtraction.
(riscv_insert_uleb128_fixes): New function, scan uleb128 subtraction
expressions and insert fixups for them.
(riscv_md_finish): Called riscv_insert_uleb128_fixes for all sections.
include/
* elf/riscv.h (R_RISCV_SET_ULEB128, R_RISCV_SUB_ULEB128): Defined.
ld/
* testsuite/ld-riscv-elf/ld-riscv-elf.exp: Updated.
* testsuite/ld-riscv-elf/uleb128*: New testcase for uleb128 subtraction.
binutils/
* testsuite/binutils-all/nm.exp: Updated since RISC-V supports .uleb128.
* Extract all private_data initializations into riscv_init_disasm_info, which
is called from print_insn_riscv rather than riscv_disassemble_insn.
* disassemble_free_target seems like the right place to release all target
private_data, including internal data structures like riscv_subsets.
Therefore, add a new function, disassemble_free_riscv, to release them
safely; a sketch of this pattern follows.
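A minimal sketch of that init-once / free-on-teardown pattern, assuming an
illustrative riscv_private_data layout (the real structure and the real
riscv-dis.c code have more fields and also tear down riscv_subsets):

#include "dis-asm.h"
#include <stdlib.h>

/* Illustrative private data only; the real riscv_private_data differs.  */
struct riscv_private_data
{
  bfd_vma gp;
  bfd_vma hi_addr[32];
};

/* Allocate and initialize private_data once, on first use.  */
static struct riscv_private_data *
riscv_init_disasm_info (struct disassemble_info *info)
{
  if (info->private_data == NULL)
    {
      struct riscv_private_data *pd = calloc (1, sizeof (*pd));
      /* ... fill in default values here ... */
      info->private_data = pd;
    }
  return info->private_data;
}

/* Called from disassemble_free_target: release private_data and any
   internal state kept by the target.  */
void
disassemble_free_riscv (struct disassemble_info *info)
{
  free (info->private_data);
  info->private_data = NULL;
}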
opcodes/
* disassemble.c (disassemble_free_target): Called disassemble_free_riscv
for riscv to release private_data and internal data structures.
* disassemble.h: Added extern disassemble_free_riscv.
* riscv-dis.c (riscv_init_disasm_info): New function, used to init
riscv_private_data.
(riscv_disassemble_insn): Moved riscv_private_data initializations
into riscv_init_disasm_info.
(print_insn_riscv): Called riscv_init_disasm_info to initialize
riscv_private_data only once.
(disassemble_free_riscv): New function, used to free the internal data
structures, like riscv_subsets.
While a442cac508 ("ix86: wrap constants") helped address a number of
inconsistencies between BFD64 and !BFD64 builds, it has also resulted in
certain bogus uses of constants no longer being warned about. Leverage
the md_optimize_expr() hook to adjust when to actually truncate
expressions to 32 bits - any involvement of binary expressions (which
would be evaluated in 32 bits only when !BFD64) signals the need for
doing so. Plain constants (or ones merely subject to unary operators)
should remain un-truncated - they would be handled as bignums when
!BFD64, and hence are okay to permit.
To compensate
- slightly extend optimize_imm() (to be honest I never understood why
the code being added - or something similar - wasn't there in the
first place),
- adjust expectations of the disp-imm-32 testcase (there are now
warnings, as there should be for any code which won't build [warning-
free] when !BFD64, and Disp8/Imm8 are no longer used in the warned
about cases).
Give backends a chance to see these, just as they can see binary ones.
Most of those which use this hook already cope with NULL being passed
for the left operand (typically because of checking the operator first).
Adjust the two which don't.
Take the opportunity and also document the hook.
Unary '~' doesn't really produce an unsigned result. Neither does
subtraction (unless taking operand values into consideration). And an
abstract operator applied to two operands which aren't both unsigned
can't be assumed to yield an unsigned result; exceptions (sketched below) are
- shifts, where only signedness of the left hand operand matters,
- comparisons, which - unlike unary '!' - produce signed results (they
deliver 0 or ~0, as opposed to '!', which yields 0 or 1),
- logical operators (yielding 0 or 1 and hence treated like unary '!').
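As a rough sketch of these signedness rules (the operator enum and helper are
purely illustrative, not gas's actual operatorT handling):

#include <stdbool.h>

/* Illustrative operator classes, not gas internals.  */
enum op_kind { OP_BIT_NOT /* ~ */, OP_LOGICAL_NOT /* ! */, OP_SUBTRACT,
               OP_SHIFT, OP_COMPARE, OP_LOGICAL /* && || */, OP_OTHER_BINARY };

/* Sketch: may the result of OP be treated as unsigned, given the
   signedness of its operand(s)?  */
static bool
result_is_unsigned (enum op_kind op, bool left_unsigned, bool right_unsigned)
{
  switch (op)
    {
    case OP_BIT_NOT:       /* ~x is negative for any x >= 0.  */
    case OP_SUBTRACT:      /* May go negative without knowing the values.  */
    case OP_COMPARE:       /* Yields 0 or ~0, i.e. a signed result.  */
      return false;
    case OP_LOGICAL_NOT:   /* Yields 0 or 1.  */
    case OP_LOGICAL:       /* Treated like unary '!'.  */
      return true;
    case OP_SHIFT:         /* Only the left operand's signedness matters.  */
      return left_unsigned;
    default:               /* Other binary ops need both operands unsigned.  */
      return left_unsigned && right_unsigned;
    }
}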
While doing this (specifically while extending the all/quad testcase),
update .quad and .8byte documentation: With 64-bit architectures now
being common, it is highly inappropriate to state that these directives
unconditionally require bignums.
In a442cac508 ("ix86: wrap constants") I made the truncation condition
too relaxed: Any indication of a mode that's possible with BFD64 only
should avoid the truncation. Therefore, like in the other two cases of
calls to extend_to_32bit_address(), also check whether we're generating
a 64-bit object.
Eli pointed out that @sc only produces small caps for lower case
letters in its argument, so it's weird to write it using upper-case
letters. This patch fixes the instances I found.
Approved-By: Eli Zaretskii <eliz@gnu.org>
gdb.fortran/lbound-ubound.exp reads the expected lbound and ubound
values by reading some output from the inferior. This is racy when
running on boards where the inferior I/O is on a separate TTY than
GDB's, such as native-gdbserver.
I sometimes see this behavior:
(gdb) continue
Continuing.
Breakpoint 2, do_test (lb=..., ub=...) at /home/jenkins/workspace/binutils-gdb_master_linuxbuild/platform/jammy-amd64/target_board/nati
ve-gdbserver/src/binutils-gdb/gdb/testsuite/gdb.fortran/lbound-ubound.F90:45
45 print *, "" ! Test Breakpoint
(gdb) Remote debugging from host ::1, port 37496
Expected GDB Output:
LBOUND = (-8, -10)
UBOUND = (-1, -2)
APB: Run a test here
APB: Expected lbound '(-8, -10)'
APB: Expected ubound ''
What happened is that expect read the output from GDB before the output
from the inferior, triggering this gdb_test_multiple clause:
-re "$gdb_prompt $" {
set found_prompt true
if {$found_dealloc_breakpoint
|| ($expected_lbound != "" && $expected_ubound != "")} {
# We're done.
} else {
exp_continue
}
}
So it set found_prompt, but the gdb_test_multiple kept going because
found_dealloc_breakpoint is false (this is the flag indicating that the
test is finished) and we still don't have expected_lbound and
expected_ubound. Then, expect reads in the inferior I/O, triggering
this clause:
-re ".*LBOUND = (\[^\r\n\]+)\r\n" {
set expected_lbound $expect_out(1,string)
if {!$found_prompt} {
exp_continue
}
}
This sets expected_lbound, but since found_prompt is true, we don't do
exp_continue, and we exit the gdb_test_multiple without having set
expected_ubound.
Change the test to read the values from the lb and ub function
parameters instead. As far as I understand, this still exercises what
we want to test. These variables contain the return values of the
lbound and ubound functions as computed by the program. We'll use them
to check the return values of the lbound and ubound functions as
computed by GDB.
Change-Id: I3c4d3d17d9291870a758a42301d15a007821ebb5
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30414
In the current code, when executing the following test on LoongArch:
$ make check-gdb TESTS="gdb.base/gnu-ifunc.exp"
=== gdb Summary ===
# of expected passes 111
# of unexpected failures 62
According to IFUNC's working process [1], the first time the IFUNC function
is called, the dynamic linker will not simply fill the .got.plt entry
with the actual address of the IFUNC symbol; it will call the IFUNC resolver
function, take the return address, use it as the symbol's bound address,
and put it in the .got.plt entry. The initial address in the .got.plt entry
is not a real function address. Depending on the compiler implementation,
different addresses will be filled in; most architectures will use
a .plt entry address to fill in the corresponding .got.plt entry.
In gdb, elf_gnu_ifunc_resolve_addr() will be called to return the real
IFUNC function address. It first checks whether the real address for
the IFUNC symbol has already been resolved, using the following function:
static bool
elf_gnu_ifunc_resolve_name (const char *name, CORE_ADDR *addr_p)
{
  if (elf_gnu_ifunc_resolve_by_cache (name, addr_p))
    return true;
  if (elf_gnu_ifunc_resolve_by_got (name, addr_p))
    return true;
  return false;
}
In elf_gnu_ifunc_resolve_by_got(), gdb gets the contents of the
.got.plt entry and determines whether the contents are the correct address
by calling elf_gnu_ifunc_record_cache(). Based on the IFUNC working
principle analysis above, the address filled in the .got.plt entry is
initially not the actual target function address; it would be a .plt
entry address corresponding to a symbol like *@plt. In this case, gdb just
goes back to execute the resolver function and puts the return address
in the .got.plt entry. After that, gdb can get the real IFUNC address via
the .got.plt entry.
On LoongArch, initially, each address filled in the .got.plt entries
is the address of the first .plt entry. Some architectures such as LoongArch
define the symbol _PROCEDURE_LINKAGE_TABLE_ at the start of the .plt
section. This symbol marks the first .plt entry, so gdb needs to check
this symbol in elf_gnu_ifunc_record_cache() (a sketch of this check
follows the disassembly below).
On LoongArch, the .got.plt and .plt sections are as follows:
$objdump -D gdb/testsuite/outputs/gdb.base/gnu-ifunc/gnu-ifunc-0-0-0
...
0000000120010008 <.got.plt>:
120010008: ffffffff 0xffffffff
12001000c: ffffffff 0xffffffff
...
120010018: 20004000 ll.w $zero, $zero, 64(0x40)
12001001c: 00000001 0x00000001
120010020: 20004000 ll.w $zero, $zero, 64(0x40)
120010024: 00000001 0x00000001
120010028: 20004000 ll.w $zero, $zero, 64(0x40)
12001002c: 00000001 0x00000001
120010030: 20004000 ll.w $zero, $zero, 64(0x40)
120010034: 00000001 0x00000001
...
Disassembly of section .plt:
0000000120004000 <_PROCEDURE_LINKAGE_TABLE_>:
120004000: 1c00018e pcaddu12i $t2, 12(0xc)
120004004: 0011bdad sub.d $t1, $t1, $t3
120004008: 28c021cf ld.d $t3, $t2, 8(0x8)
12000400c: 02ff51ad addi.d $t1, $t1, -44(0xfd4)
120004010: 02c021cc addi.d $t0, $t2, 8(0x8)
120004014: 004505ad srli.d $t1, $t1, 0x1
120004018: 28c0218c ld.d $t0, $t0, 8(0x8)
12000401c: 4c0001e0 jirl $zero, $t3, 0
0000000120004020 <__libc_start_main@plt>:
120004020: 1c00018f pcaddu12i $t3, 12(0xc)
120004024: 28ffe1ef ld.d $t3, $t3, -8(0xff8)
120004028: 4c0001ed jirl $t1, $t3, 0
12000402c: 03400000 andi $zero, $zero, 0x0
0000000120004030 <abort@plt>:
120004030: 1c00018f pcaddu12i $t3, 12(0xc)
120004034: 28ffc1ef ld.d $t3, $t3, -16(0xff0)
120004038: 4c0001ed jirl $t1, $t3, 0
12000403c: 03400000 andi $zero, $zero, 0x0
0000000120004040 <gnu_ifunc@plt>:
120004040: 1c00018f pcaddu12i $t3, 12(0xc)
120004044: 28ffa1ef ld.d $t3, $t3, -24(0xfe8)
120004048: 4c0001ed jirl $t1, $t3, 0
12000404c: 03400000 andi $zero, $zero, 0x0
...
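A hedged sketch of the extra check described above; the helper and its
parameters are illustrative, not gdb's actual elf_gnu_ifunc_record_cache()
internals:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t CORE_ADDR;   /* Stand-in for gdb's CORE_ADDR.  */

/* Sketch: decide whether ADDR, read from a .got.plt entry, may be cached
   as the resolved IFUNC address.  On LoongArch every unresolved entry
   initially points at the first .plt entry, which is marked by the
   _PROCEDURE_LINKAGE_TABLE_ symbol, so such an address must be rejected
   in addition to the usual "*@plt" stub check.  */
static bool
got_plt_entry_is_resolved (CORE_ADDR addr, CORE_ADDR plt_start,
                           bool addr_is_plt_stub)
{
  if (addr_is_plt_stub)
    return false;   /* Entry still points at a foo@plt stub.  */
  if (addr == plt_start)
    return false;   /* Entry points at _PROCEDURE_LINKAGE_TABLE_.  */
  return true;
}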
With this patch:
$ make check-gdb TESTS="gdb.base/gnu-ifunc.exp"
=== gdb Summary ===
# of expected passes 173
[1] https://sourceware.org/glibc/wiki/GNU_IFUNC
Signed-off-by: Hui Li <lihui@loongson.cn>
Another thing, section target_index is renumbered in
coff_compute_section_file_positions and _bfd_xcoff_bfd_final_link. I
don't know that there is currently any way that the output bfd
section_by_target_index could be populated before this point but
clear them out so no one need worry about it.
* coffcode.h (coff_compute_section_file_positions): Clear
section_by_target_index hash table when changing target_index.
(_bfd_xcoff_bfd_final_link): Likewise.
I noticed a trailing whitespace and some indentation errors in lib/tuiterm.exp.
Fix these.
Tested by re-running the TUI test-cases (gdb.tui/*.exp and gdb.python/tui*.exp)
on x86_64-linux.
The sframe_get_funcdesc_with_addr API is currently used internally by
sframe_find_fre ().
In this test, we create three dummy SFrame FDEs with 4 FREs each. Then,
we use a few negative tests to look up FREs with PCs outside the range of
PCs covered by the FDEs, ensuring a graceful return from
sframe_get_funcdesc_with_addr in all cases. Some positive tests that
exercise further scenarios are also added.
libsframe/
* Makefile.in: Regenerated.
* testsuite/libsframe.find/find.exp: Include new test.
* testsuite/libsframe.find/findfunc-1.c: New test.
* testsuite/libsframe.find/local.mk: Include new test.
libsframe provides an API to find the FRE associated with a given PC in
the program. This patch adds a direct test of this API.
In this test, we create two dummy SFrame FDEs with 4 FREs each. Then we
test that sframe_find_fre () works for the first, second, third and the
last FRE from one of the FDEs. Such a test ensures better regression
testing for the sframe_find_fre () function which is going to be the
bread and butter of an SFrame based stack tracer.
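A hedged sketch of how a consumer drives this API, assuming the
sframe_decode ()/sframe_find_fre ()/sframe_decoder_free () entry points from
sframe-api.h and abbreviating buffer setup and error handling:

#include <stddef.h>
#include "sframe-api.h"

/* Sketch only: decode an SFrame buffer and look up the FRE covering PC.  */
static int
lookup_fre (const char *sframe_buf, size_t buf_size, int32_t pc)
{
  int err;
  sframe_decoder_ctx *dctx = sframe_decode (sframe_buf, buf_size, &err);
  if (dctx == NULL)
    return err;

  sframe_frame_row_entry fre;
  err = sframe_find_fre (dctx, pc, &fre);
  /* On success, FRE now carries the CFA/FP/RA recovery rules for PC;
     a PC outside all FDEs (the negative tests) makes the call fail.  */

  sframe_decoder_free (&dctx);
  return err;
}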
libsframe/
* Makefile.in: Regenerated.
* testsuite/libsframe.find/find.exp: New test.
* testsuite/libsframe.find/findfre-1.c: New test.
* testsuite/libsframe.find/local.mk: Build new test.
* testsuite/local.mk: Include libsframe.find.
Commit 0e759f232b regressed these tests:
rs6000-aix7.2 +FAIL: Garbage collection test 1 (32-bit)
rs6000-aix7.2 +FAIL: Garbage collection test 1 (64-bit)
rs6000-aix7.2 +FAIL: Glink test 1 (32-bit)
rs6000-aix7.2 +FAIL: Glink test 1 (64-bit)
Investigation showed segfaults in coff_section_from_bfd_index called
by xcoff_write_global_symbol due to the hash table pointer being
NULL. Well, yes, the hash table isn't initialised for the output bfd.
mkobject_hook is the wrong place to do that.
* coffcode.h: Revert 0e759f232b changes.
* peicode.h: Likewise.
* coff-x86_64.c (htab_hash_section_index, htab_eq_section_index):
Moved here from coffcode.h.
(coff_amd64_rtype_to_howto): Create section_by_index htab.
* coffgen.c (htab_hash_section_target_index),
(htab_eq_section_target_index): Moved here from coffcode.h.
(coff_section_from_bfd_index): Create section_by_target_index
htab. Stash newly created sections in htab.
Well, it doesn't work on x86 or ppc, which both have # starting
comments anywhere on a line. I think it is therefore only useful on
sparc.
PR 11601
* config/obj-elf.c (obj_elf_section_word): Only compile for sparc.
(obj_elf_section): Only support solaris .section directive on
sparc.
* doc/as.texi (Section): Mention that solaris .section
directive is only supported for sparc.
"&str" is an important type in Rust -- it's the type of string
literals. However, the compiler puts it in the DWARF in a funny way.
The slice itself is present and named "&str". However, the Rust
parser doesn't look for types with names like this, but instead tries
to construct them from components. In this case it tries to make a
pointer-to-"str" -- but "str" isn't always available, and in any case
that wouldn't yield the best result.
This patch adds a special case for &str.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=22251
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Found when attempting to build binutils on sparc sunos-5.8 where
sys/byteorder.h defines _BIG_ENDIAN but not any of the BYTE_ORDER
variants. This patch adds the extra tests to cope with the old
machine, and tidies the header a little.
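A hedged sketch of the kind of fallback involved; the HOST_BIG_ENDIAN macro
here is illustrative, and only the gcc predefines and the host header's
_BIG_ENDIAN/_LITTLE_ENDIAN macros are taken from the description above:

/* Illustrative only: prefer the compiler's own byte-order macros and fall
   back to the host's _BIG_ENDIAN / _LITTLE_ENDIAN (as on sparc sunos-5.8,
   where <sys/byteorder.h> defines only _BIG_ENDIAN).  */
#if defined (__BYTE_ORDER__) && defined (__ORDER_BIG_ENDIAN__)
# define HOST_BIG_ENDIAN (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
#elif defined (_BIG_ENDIAN) && !defined (_LITTLE_ENDIAN)
# define HOST_BIG_ENDIAN 1
#elif defined (_LITTLE_ENDIAN) && !defined (_BIG_ENDIAN)
# define HOST_BIG_ENDIAN 0
#else
# error "cannot determine host byte order"
#endif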
PR 29961
* plugin-api.h: When handling non-gcc or gcc < 4.6.0, include the
necessary header files before testing macros. Make more use
of #elif. Test _LITTLE_ENDIAN and _BIG_ENDIAN in final tests.
Trying to build binutils with an older gcc currently fails. Working
around these gcc bugs is not onerous so let's fix them.
bfd/
* elf32-csky.c (csky_elf_size_dynamic_sections): Don't type-pun
pointer.
* elf32-rl78.c (rl78_compute_complex_reloc): Rename "stat"
variable to "status".
gas/
* compress-debug.c (compress_finish): Supply all fields in
ZSTD_inBuffer initialisation.
include/
* xtensa-dynconfig.h (xtensa_isa_internal): Delete unnecessary
forward declaration.
opcodes/
* loongarch-opc.c: Supply all fields of zero struct initialisation
in various opcode tables.
Static function names are not available in stripped libraries.
In this case, gprofng maps the PC to a fake function like <static>@0xPC (<libname>).
Sometimes gprofng creates two such functions instead of one.
FUNC_FLAG_SIMULATED is also needed for these fake functions.
gprofng/ChangeLog
2023-05-11 Vladimir Mezentsev <vladimir.mezentsev@oracle.com>
* src/LoadObject.cc (LoadObject::find_function): Set FUNC_FLAG_SIMULATED.
Include a new function in the right place.
Currently, for a source file containing only 5 lines, we also show line
numbers 6 and 7 if they're in scope of the source window:
...
0 +-compact-source.c----------------+
1 |___3_{ |
2 |___4_ return 0; |
3 |___5_} |
4 |___6_ |
5 |___7_ |
6 +---------------------------------+
...
Fix this by not showing line numbers that are not in the source file, so that we have instead:
...
0 +-compact-source.c----------------+
1 |___3_{ |
2 |___4_ return 0; |
3 |___5_} |
4 | |
5 | |
6 +---------------------------------+
...
Tested on x86_64-linux.
Suggested-By: Simon Marchi <simon.marchi@efficios.com>
Approved-By: Tom Tromey <tom@tromey.com>
* libcoff-in.h (struct coff_tdata): Add section_by_index and
section_by_target_index hash tables.
* libcoff.h: Regenerate.
* coffcode.h (htab_hash_section_index): New function.
(htab_eq_section_index): New function.
(htab_hash_section_target_index): New function.
(htab_eq_section_target_index): New function.
(coff_mkobject_hook): Create the hash tables.
* peicode.h: Add the same new functions.
(pe_mkobject_hook): Create the hash tables.
* coff-x86_64.c (coff_amd64_rtype_to_howto): Use the new tables to
speed up lookups.
* coffgen.c (coff_section_from_bfd_index): Likewise.
(_bfd_coff_close_and_cleanup): Delete the hash tables.
Rewrite gdb_supported_languages as a caching proc that actually
queries GDB for the list of supported languages, rather than just
containing a hard-coded list of languages.
There's only one test that uses this proc right now,
gdb.python/py-function.exp, and that still passes after this change,
with no changes in the test names.
After this commit:
commit a68f7e9844
Date: Tue May 9 10:28:42 2023 +0100
gdb/testsuite: extend special '^' handling to gdb_test_multiple
buildbot notified me of a regression on s390 in the test:
gdb.base/break-main-file-remove-fail.exp
the failure looks like this:
print /d ((int (*) (void *, size_t)) munmap) (16781312, 4096)
warning: Error removing breakpoint 0
$2 = 0
(gdb) FAIL: gdb.base/break-main-file-remove-fail.exp: cmdline: get integer valueof "((int (*) (void *, size_t)) munmap) (16781312, 4096)"
On the mailing list it has been reported that this failure also
impacts arm, aarch64, and possibly ppc/ppc64 too.
The above commit changed get_integer_valueof so that no output is
expected between the command and the '$2 = 0' line. In this case the
'warning: Error removing breakpoint 0' output is causing the
get_integer_valueof call to fail.
The reason for this warning is that this test deliberately calls
munmap on a page of the inferior's code. The test is checking that
GDB can handle the situation where a s/w breakpoint can't be
removed (due to the page no longer being readable/writable).
The test that is supposed to trigger the warning is later in the test
script when we delete a breakpoint.
So why do some targets trigger the warning earlier during the inferior
call?
The impacted targets use AT_ENTRY_POINT as their strategy for handling
inferior calls, that is, the trampoline that calls the inferior
function is placed at the program's entry point, e.g. often the _start
label.
If this location happens to be on the same page as the page that the
test script unmaps then, when the inferior function call returns, GDB
will not be able to remove the temporary breakpoint that is inserted
to catch the inferior function call returning! As a result we end up
seeing the warning earlier than expected.
I did wonder if this means I should relax the pattern in
get_integer_valueof - just accept that there might be additional
output from GDB which we should ignore.
However, I don't think this is the right way to go. With the change in
a68f7e9844 we are now stricter for GDB emitting additional,
unexpected, output, and I think that is a good thing.
So, I think, in this case, in order to handle the possible extra
output, we should implement something like get_integer_valueof
directly in the gdb.base/break-main-file-remove-fail.exp test script.
This local version will handle the possible warning output.
After this the test should pass again on the impacted targets.
This commit extends the Python Disassembler API to allow for styling
of the instructions.
Before this commit the Python Disassembler API allowed the user to do
two things:
- They could intercept instruction disassembly requests and return a
string of their choosing, this string then became the disassembled
instruction, or
- They could call builtin_disassemble, which would call back into
libopcode to perform the disassembly. As libopcode printed the
instruction GDB would collect these print requests and build a
string. This string was then returned from the builtin_disassemble
call, and the user could modify or extend this string as needed.
Neither of these approaches allowed for, or preserved, disassembler
styling, which is now available within libopcodes for many of the more
popular architectures GDB supports.
This commit aims to fill this gap. After this commit a user will be
able to do the following things:
- Implement a custom instruction disassembler entirely in Python
without calling back into libopcodes, the custom disassembler will
be able to return styling information such that GDB will display
the instruction fully styled. All of GDB's existing style
settings will affect how instructions coming from the Python
disassembler are displayed in the expected manner.
- Call builtin_disassemble and receive a result that represents how
libopcode would like the instruction styled. The user can then
adjust or extend the disassembled instruction before returning the
result to GDB. Again, the instruction will be styled as expected.
To achieve this I will add two new classes to GDB,
DisassemblerTextPart and DisassemblerAddressPart.
Within builtin_disassemble, instead of capturing the print calls from
libopcodes and building a single string, we will now create either a
text part or address part and store these parts in a vector.
The DisassemblerTextPart will capture a small piece of text along with
the associated style that should be used to display the text. This
corresponds to the disassembler calling
disassemble_info::fprintf_styled_func, or for disassemblers that don't
support styling disassemble_info::fprintf_func.
The DisassemblerAddressPart is used when libopcodes requests that an
address be printed, and takes care of printing the address and
associated symbol, this corresponds to the disassembler calling
disassemble_info::print_address_func.
These parts are then placed within the DisassemblerResult when
builtin_disassemble returns.
Alternatively, the user can directly create parts by calling two new
methods on the DisassembleInfo class: DisassembleInfo.text_part and
DisassembleInfo.address_part.
Having created these parts the user can then pass these parts when
initializing a new DisassemblerResult object.
Finally, when we return from Python to gdbpy_print_insn, one way or
another, the result being returned will have a list of parts. Back in
GDB's C++ code we walk the list of parts and call back into GDB's core
to display the disassembled instruction with the correct styling.
The new API lives in parallel with the old API. Any existing code
that creates a DisassemblerResult using a single string immediately
creates a single DisassemblerTextPart containing the entire
instruction and gives this part the default text style. This is also
what happens if the user calls builtin_disassemble for an architecture
that doesn't (yet) support libopcode styling.
This matches up with what happens when the Python API is not involved,
an architecture without disassembler styling support uses the old
libopcodes printing API (the API that doesn't pass style info), and
GDB just prints everything using the default text style.
The reason that parts are created by calling methods on
DisassembleInfo, rather than calling the class constructor directly,
is DisassemblerAddressPart. Ideally this part would only hold the
address which the part represents, but in order to support backwards
compatibility we need to be able to convert the
DisassemblerAddressPart into a string. To do that we need to call
GDB's internal print_address function, and to do that we need a
gdbarch.
What this means is that the DisassemblerAddressPart needs to take a
gdb.Architecture object at creation time. The only valid place a user
can pull this from is from the DisassembleInfo object, so having the
DisassembleInfo act as a factory ensures that the correct gdbarch is
passed over each time. I implemented both solutions (the one
presented here, and an alternative where parts could be constructed
directly), and this felt like the cleanest solution.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit is a refactor ahead of the next change which will make
disassembler styling available through the Python API.
Unfortunately, in order to make the styling support available, I think
the easiest solution is to make a very small change to the existing
API.
The current API relies on returning a DisassemblerResult object to
represent each disassembled instruction. Currently GDB allows the
DisassemblerResult class to be sub-classed, which could mean that a
user tries to override the various attributes that exist on the
DisassemblerResult object.
This commit removes this ability, effectively making the
DisassemblerResult class final.
Though this is a change to the existing API, I'm hoping this isn't
going to cause too many issues:
- The Python disassembler API was only added in the previous release
of GDB, so I don't expect it to be widely used yet, and
- It's not clear to me why a user would need to sub-class the
DisassemblerResult type, I allowed it in the original patch
because at the time I couldn't see any reason to NOT allow it.
Having prevented sub-classing I can now rework the tail end of the
gdbpy_print_insn function; instead of pulling the results out of the
DisassemblerResult object by calling back into Python, I now cast the
Python object back to its C++ type (disasm_result_object), and access
the fields directly from there. In later commits I will be reworking
the disasm_result_object type in order to hold information about the
styled disassembler output.
The tests that dealt with sub-classing DisassemblerResult have been
removed, and a new test that confirms that DisassemblerResult can't be
sub-classed has been added.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
The cooked index scanner has special code to handle forward DIE
references. However, a bug report led to the discovery that this
code does not work -- the "deferred_entry::spec_offset" field is
written to but never used, i.e., the lookup is done using the wrong
key.
This patch fixes the bug and adds a regression test.
The test in the bug itself used a thread_local variable, which
provoked a failure at runtime. This test instead uses "maint print
objfiles" and then inspects to ensure that the entry in question has a
parent. This lets us avoid a clang dependency in the test.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30271
If a function symbol only gets its address via la.global and is not
directly called by a bl instruction, the PLT entry is not required.
bfd/ChangeLog:
* elfnn-loongarch.c (loongarch_elf_adjust_dynamic_symbol): Fix a bug
in PLT entry generation.
ld/ChangeLog:
* testsuite/ld-elf/shared.exp: Clear xfail for LoongArch.
Currently
print -elements=3 -- "AAAAAA"
prints the complete string, which is not what the user asked for.
Fix two buggy tests exposed by the fix, and add a new test.
Reviewed-by: Keith Seitz <keiths@redhat.com>
Testing for NULL in pic_need_relax fixes the other call to this
function in md_estimate_size_before_relax.
PR 28955
* config/tc-mips.c (mips_frob_file): Move NULL sym test to..
(pic_need_relax): ..here.
The answer to PR28902 may be deduced from the existing INSERT
documentation that says the default script is parsed after the -T
INSERT script, if you assume (correctly) that nothing special is done
when inserting into -T scripts overriding the default script. In both
cases INSERT handling looks for the specified output section later on
the internal list of parsed script commands. This isn't obvious
though, so make the ordering explicit, and mention that section
assignments are the same too.
PR 28902
* ld.texi (INSERT): Specify ordering when -T is used both to
override the default script and to augment.
A co-worker here at AdaCore discovered that the Pragma Import series
caused a regression. When debugging gnat1, gdb started asking for
overload resolution like:
(gdb) call pp(n)
Multiple matches for pp
[0] cancel
[1] pp (types.union_id) at ../../gcc/gcc/ada/treepr.adb:511
[2] treepr.pp (types.union_id) at ../../gcc/gcc/ada/treepr.adb:511
This worked before the series, and is strange anyway, because the
matches refer to the same function.
This patch adds a test case for this situation and fixes the bug by
pruning identical functions in remove_extra_symbols.
Ada can import C APIs and also export Ada constructs to C via Pragma
Import and Pragma Export. This patch adds support for these to gdb,
by arranging to either defer some aspects of a symbol to the
underlying C symbol (for Import) or by introducing a second symbol
(for Export). A somewhat tricky approach is needed, both because gdb
doesn't generally handle symbol aliasing, and because Ada treats
symbol names in an unusual way (as compared to the rest of gdb).
This moves the definition of symbol::value_block outside of the class.
A subsequent patch will change this method to use SYMBOL_BLOCK_OPS,
and it seemed simplest to move this method out-of-line, and cleaner to
do this as a separate change.
A subsequent patch will introduce more aclass registrations, causing
the number to go over the current maximum. This bumps the number.
Note that there's a separate static assert that ensures that this
number doesn't get too large for the field size in the symbol.