This reverts commit 7cab91b87d.
Now that we have a real hardware platform with PAE40 enabled
(here I mean axs103 with firmware v1.2) and 1 GB of DDR mapped to
0x1_a000_0000-0x1_ffff_ffff, we're really targeting memory above 4 GB
when PAE40 is enabled. This in turn requires HIGHMEM to be enabled,
otherwise the user won't see any difference when enabling PAE in
the kernel configuration, as only lowmem will be used anyway.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The accumulator is present in configs with FPU and/or DSP MPY (mpy > 6).
Instead of doing this in pt_regs (and thus every kernel entry/exit),
this could have been done in the context switch (and for user tasks only) as
currently the kernel doesn't clobber these registers of its own accord.
However we will soon start using 64-bit multiply instructions in the kernel,
which can clobber these. The gcc folks also plan to start using these
as GPRs, hence it is better to always save/restore them.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
A typical SMP system expects cache coherency. Initial NPS platform
support was slated to be SMP w/o cache coherency.
However it seems the platform now selects that option, so there is no
point in keeping it around.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Currently the Kconfig knob ARC_NUMBER_OF_INTERRUPTS is used as the indicator
of the hard irq count. But it is flawed in that it doesn't affect
- NR_IRQS : the number of virtual interrupts
- NR_CPU_IRQS : the number of hardware interrupts
Moreover the actual hardware irq count might still not be the same as
ARC_NUMBER_OF_INTERRUPTS. So use the information available in the
Build Configuration Registers and get rid of the Kconfig option.
We still need "some" build time info about the irq count to set up
a sufficient number of vector table entries. This is done with a
sufficiently large NR_CPU_IRQS which will eventually be used solely for
that purpose (subsequent patches will remove its usage elsewhere).
So to summarize what this patch does:
* NR_CPU_IRQS defines the maximum number of hardware interrupts.
* Remove the ARC_NUMBER_OF_INTERRUPTS option and create the interrupt
table for all possible hardware interrupts.
* Increase the maximum number of virtual IRQs to 512. ARCv2 can
support 240 interrupts in the core interrupt controller
and 128 interrupts in the IDU. Thus 512 virtual IRQs should be
enough for most board configurations.
This patch leads to NR_CPU_IRQS being defined in 2 places, to reduce the
overall churn. The next patch will remove the 2nd definition anyway.
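As a rough sketch of the runtime side (the register id, struct layout and helper names below are illustrative assumptions, not necessarily what the patch uses):

/* Kernel-context sketch: READ_BCR() is the existing ARC helper for reading
 * a Build Configuration Register; the register id and the bitfield layout
 * here are assumed purely for illustration.
 */
struct bcr_irq_arcv2 {
    unsigned int ver:8, irqs:8, exts:8, prio:4, firq:1, pad:3;
};

unsigned int nr_hw_irqs;    /* actual hard irq count, probed at boot */

static void __init arc_probe_irq_count(void)
{
    struct bcr_irq_arcv2 irq_bcr;

    READ_BCR(ARC_REG_IRQ_BCR, irq_bcr);  /* query the hardware build config */
    nr_hw_irqs = irq_bcr.irqs;           /* hard irq count as built in RTL  */
}

NR_CPU_IRQS then only has to be a safe upper bound for sizing the vector table, while everything else uses the probed value.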
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: reworked the changelog a bit]
Commit d65283f7b6 added mod->arch.secstr under
CONFIG_ARC_DW2_UNWIND, but used it unconditionally, which broke builds
when the option was disabled. Fix that by adjusting the #ifdef guard.
And while at it, add a missing guard (for the unwinder) in module.c as well.
Reported-by: Waldemar Brodkorb <wbx@openadk.org>
Cc: stable@vger.kernel.org #4.9
Fixes: d65283f7b6 ("ARC: module: elide loop to save reference to .eh_frame")
Tested-by: Anton Kolesov <akolesov@synopsys.com>
Reviewed-by: Alexey Brodkin <abrodkin@synopsys.com>
[abrodkin: provided fixlet to Kconfig per failure in allnoconfig build]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This adds support for
- CONFIG_ARC_TIMERS : legacy 32-bit TIMER0 and TIMER1 which count UP
from @CNT to @LIMIT, before optionally triggering an interrupt.
These are programmed using the ARC auxiliary register interface and
are present in all ARC cores (ARC700 and ARC HS38).
TIMER0 serves as the clockevent for all ARC Linux builds.
TIMER1 is used as the clocksource in ARC700 builds.
- CONFIG_ARC_TIMERS_64BIT: 64-bit counters, RTC and GFRC, found in
ARC HS38 cores. These are independent IP blocks, each with its own
programming model.
Link: http://lkml.kernel.org/r/20161111231132.GA4186@mai
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The original distinction was made as they were developed at different
times and primarily because they are specific to UP (RTC) and SMP (GFRC).
But given that the driver handles that at runtime (i.e. not allowing
RTC as clocksource in SMP), we can simplify things a bit.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Add support for lzma compressed uImage.
Support for gzip was already available but could not be enabled because
we were missing CONFIG_HAVE_KERNEL_GZIP in arch/arc/Kconfig.
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The IDU intc is technically part of MCIP (Multi-core IP), hence
historically it was only available in an SMP hardware build (and thus only
in an SMP kernel build). That hardware restriction has now been lifted,
so a UP kernel needs to support it as well.
This requires breaking mcip.c into parts which are strictly SMP
(inter-core interrupts) and the IDU, which in reality is just another
intc and thus has no bearing on SMP.
This change allows IDU in UP builds, and with a suitable device tree we
can have the cascaded intc system
ARCv2 core intc <---> ARCv2 IDU intc <---> peripherals
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
It seems the values were assigned as absolute numbers and not as
shift values, i.e. they should be 0 for one node (2^0) and 1 for
a couple of nodes (2^1).
Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The ARCv2 ISA provides 64-bit exclusive load/stores, so use them to implement
the 64-bit atomics and elide the spinlock based generic 64-bit atomics.
Boot tested with the atomic64 self-test (and God bless the person who wrote
it; I realized my inline assembly is sloppy as hell).
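As an illustration, a minimal sketch of one such op (modelled on an LLOCKD/SCONDD retry loop; the exact constraints and macros in the patch may differ):

static inline void atomic64_add(s64 a, atomic64_t *v)
{
    s64 val;

    __asm__ __volatile__(
    "1:                          \n"
    "   llockd  %0, [%1]         \n"   /* 64-bit exclusive load        */
    "   add.f   %L0, %L0, %L2    \n"   /* add low words, set carry     */
    "   adc     %H0, %H0, %H2    \n"   /* add high words with carry    */
    "   scondd  %0, [%1]         \n"   /* 64-bit store-conditional     */
    "   bnz     1b               \n"   /* retry if exclusivity lost    */
    : "=&r"(val)
    : "r"(&v->counter), "ir"(a)
    : "cc");
}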
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This reverts commit e78fdfef84.
The issue was fixed in hardware in the HS2.1C release and there are no known
external users of the affected RTL, so revert the whole delayed retry series!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
ARC700 support for 2 interrupt priorities historically allowed even slow
peripherals such as emac and uart to set up high priority interrupts,
which was wrong from the beginning as they could possibly delay the more
critical timer interrupt.
The hardware support for 2 level interrupts in ARCompact is less than
ideal anyway (judging from the "hacks" in the low level entry code) and thus
is not used in production systems I know of.
So reduce the scope of this to the timer only, thereby reducing a bunch of
complexity.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The binary GCD algorithm is based on the following facts:
1. If a and b are both even, then gcd(a,b) = 2 * gcd(a/2, b/2)
2. If a is even and b is odd, then gcd(a,b) = gcd(a/2, b)
3. If a and b are both odd, then gcd(a,b) = gcd((a-b)/2, b) = gcd((a+b)/2, b)
Even on x86 machines with reasonable division hardware, the binary
algorithm runs about 25% faster (80% of the execution time) than the
division-based Euclidean algorithm.
On platforms like Alpha and ARMv6 where division is a function call to
emulation code, it's even more significant.
There are two variants of the code here, depending on whether a fast
__ffs (find least significant set bit) instruction is available. This
allows the unpredictable branches in the bit-at-a-time shifting loop to
be eliminated.
If a fast __ffs is not available, the "even/odd" GCD variant is used.
I used the following code to benchmark:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define swap(a, b) \
    do { \
        a ^= b; \
        b ^= a; \
        a ^= b; \
    } while (0)

unsigned long gcd0(unsigned long a, unsigned long b)
{
    unsigned long r;

    if (a < b) {
        swap(a, b);
    }

    if (b == 0)
        return a;

    while ((r = a % b) != 0) {
        a = b;
        b = r;
    }

    return b;
}

unsigned long gcd1(unsigned long a, unsigned long b)
{
    unsigned long r = a | b;

    if (!a || !b)
        return r;

    b >>= __builtin_ctzl(b);

    for (;;) {
        a >>= __builtin_ctzl(a);
        if (a == b)
            return a << __builtin_ctzl(r);

        if (a < b)
            swap(a, b);
        a -= b;
    }
}

unsigned long gcd2(unsigned long a, unsigned long b)
{
    unsigned long r = a | b;

    if (!a || !b)
        return r;

    r &= -r;

    while (!(b & r))
        b >>= 1;

    for (;;) {
        while (!(a & r))
            a >>= 1;
        if (a == b)
            return a;

        if (a < b)
            swap(a, b);
        a -= b;
        a >>= 1;
        if (a & r)
            a += b;
        a >>= 1;
    }
}

unsigned long gcd3(unsigned long a, unsigned long b)
{
    unsigned long r = a | b;

    if (!a || !b)
        return r;

    b >>= __builtin_ctzl(b);
    if (b == 1)
        return r & -r;

    for (;;) {
        a >>= __builtin_ctzl(a);
        if (a == 1)
            return r & -r;
        if (a == b)
            return a << __builtin_ctzl(r);

        if (a < b)
            swap(a, b);
        a -= b;
    }
}

unsigned long gcd4(unsigned long a, unsigned long b)
{
    unsigned long r = a | b;

    if (!a || !b)
        return r;

    r &= -r;

    while (!(b & r))
        b >>= 1;
    if (b == r)
        return r;

    for (;;) {
        while (!(a & r))
            a >>= 1;
        if (a == r)
            return r;
        if (a == b)
            return a;

        if (a < b)
            swap(a, b);
        a -= b;
        a >>= 1;
        if (a & r)
            a += b;
        a >>= 1;
    }
}

static unsigned long (*gcd_func[])(unsigned long a, unsigned long b) = {
    gcd0, gcd1, gcd2, gcd3, gcd4,
};

#define TEST_ENTRIES (sizeof(gcd_func) / sizeof(gcd_func[0]))

#if defined(__x86_64__)

#define rdtscll(val) do { \
    unsigned long __a,__d; \
    __asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \
    (val) = ((unsigned long long)__a) | (((unsigned long long)__d)<<32); \
} while(0)

static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
                                             unsigned long a, unsigned long b, unsigned long *res)
{
    unsigned long long start, end;
    unsigned long long ret;
    unsigned long gcd_res;

    rdtscll(start);
    gcd_res = gcd(a, b);
    rdtscll(end);

    if (end >= start)
        ret = end - start;
    else
        ret = ~0ULL - start + 1 + end;

    *res = gcd_res;
    return ret;
}

#else

static inline struct timespec read_time(void)
{
    struct timespec time;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time);
    return time;
}

static inline unsigned long long diff_time(struct timespec start, struct timespec end)
{
    struct timespec temp;

    if ((end.tv_nsec - start.tv_nsec) < 0) {
        temp.tv_sec = end.tv_sec - start.tv_sec - 1;
        temp.tv_nsec = 1000000000ULL + end.tv_nsec - start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec - start.tv_sec;
        temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    }

    return temp.tv_sec * 1000000000ULL + temp.tv_nsec;
}

static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
                                             unsigned long a, unsigned long b, unsigned long *res)
{
    struct timespec start, end;
    unsigned long gcd_res;

    start = read_time();
    gcd_res = gcd(a, b);
    end = read_time();

    *res = gcd_res;
    return diff_time(start, end);
}

#endif

static inline unsigned long get_rand()
{
    if (sizeof(long) == 8)
        return (unsigned long)rand() << 32 | rand();
    else
        return rand();
}

int main(int argc, char **argv)
{
    unsigned int seed = time(0);
    int loops = 100;
    int repeats = 1000;
    unsigned long (*res)[TEST_ENTRIES];
    unsigned long long elapsed[TEST_ENTRIES];
    int i, j, k;

    for (;;) {
        int opt = getopt(argc, argv, "n:r:s:");
        /* End condition always first */
        if (opt == -1)
            break;

        switch (opt) {
        case 'n':
            loops = atoi(optarg);
            break;
        case 'r':
            repeats = atoi(optarg);
            break;
        case 's':
            seed = strtoul(optarg, NULL, 10);
            break;
        default:
            /* You won't actually get here. */
            break;
        }
    }

    res = malloc(sizeof(unsigned long) * TEST_ENTRIES * loops);
    memset(elapsed, 0, sizeof(elapsed));

    srand(seed);
    for (j = 0; j < loops; j++) {
        unsigned long a = get_rand();
        /* Do we have args? */
        unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();
        unsigned long long min_elapsed[TEST_ENTRIES];

        for (k = 0; k < repeats; k++) {
            for (i = 0; i < TEST_ENTRIES; i++) {
                unsigned long long tmp = benchmark_gcd_func(gcd_func[i], a, b, &res[j][i]);
                if (k == 0 || min_elapsed[i] > tmp)
                    min_elapsed[i] = tmp;
            }
        }

        for (i = 0; i < TEST_ENTRIES; i++)
            elapsed[i] += min_elapsed[i];
    }

    for (i = 0; i < TEST_ENTRIES; i++)
        printf("gcd%d: elapsed %llu\n", i, elapsed[i]);

    k = 0;
    srand(seed);
    for (j = 0; j < loops; j++) {
        unsigned long a = get_rand();
        unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();

        for (i = 1; i < TEST_ENTRIES; i++) {
            if (res[j][i] != res[j][0])
                break;
        }
        if (i < TEST_ENTRIES) {
            if (k == 0) {
                k = 1;
                fprintf(stderr, "Error:\n");
            }
            fprintf(stderr, "gcd(%lu, %lu): ", a, b);
            for (i = 0; i < TEST_ENTRIES; i++)
                fprintf(stderr, "%ld%s", res[j][i], i < TEST_ENTRIES - 1 ? ", " : "\n");
        }
    }

    if (k == 0)
        fprintf(stderr, "PASS\n");

    free(res);
    return 0;
}
Compiled with "-O2", on "VirtualBox 4.4.0-22-generic #38-Ubuntu x86_64" got:
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 10174
gcd1: elapsed 2120
gcd2: elapsed 2902
gcd3: elapsed 2039
gcd4: elapsed 2812
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9309
gcd1: elapsed 2280
gcd2: elapsed 2822
gcd3: elapsed 2217
gcd4: elapsed 2710
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9589
gcd1: elapsed 2098
gcd2: elapsed 2815
gcd3: elapsed 2030
gcd4: elapsed 2718
PASS
zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
gcd0: elapsed 9914
gcd1: elapsed 2309
gcd2: elapsed 2779
gcd3: elapsed 2228
gcd4: elapsed 2709
PASS
[akpm@linux-foundation.org: avoid #defining a CONFIG_ variable]
Signed-off-by: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
Signed-off-by: George Spelvin <linux@horizon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On ARC, the lower 2G of address space is translated and used for
- user vaddr space (regions 0 to 5)
- unused kernel-user gutter (region 6)
- kernel vaddr space (region 7)
where each region simply represents 256MB of address space.
The kernel vaddr space of 256MB is used to implement vmalloc and modules.
So far this was enough, but not on an EZChip system with 4K CPUs (given
that the per-cpu mechanism uses vmalloc for allocating chunks).
So allow VMALLOC_SIZE to be configurable by expanding down into the unused
kernel-user gutter region, which at the default 256M was excessive anyway.
Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use "1UL"
as it is also pulled into assembly code in mm/tlbex.S.
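For instance (a sketch; the exact definition in the patch may differ slightly):

#define PGDIR_SIZE    _BITUL(PGDIR_SHIFT)    /* instead of (1UL << PGDIR_SHIFT) */

_BITUL() (from include/uapi/linux/const.h) drops the "UL" suffix when the header is included from assembly, so the same constant works for both C and mm/tlbex.S.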
Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: rewrote changelog, debugged bootup crash due to int vs. hex]
Acked-by: Vineet Gupta <vgupta@synopsys.com>
The primary interrupt handler arch_do_IRQ() was passing the hwirq as a Linux
virq to core code. This was fragile and only worked so far because we had
just legacy/linear domains.
This came out of a rant by Marc Zyngier:
http://lists.infradead.org/pipermail/linux-snps-arc/2015-December/000298.html
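A minimal sketch of the shape of the fix (assuming the generic handle_domain_irq() helper, which resolves a hwirq through the irq domain, is what ends up being used; the actual patch may differ in detail):

/* handle_domain_irq() is declared in <linux/irqdesc.h> */
void arch_do_IRQ(unsigned int hwirq, struct pt_regs *regs)
{
    /* map hwirq -> Linux virq via the (default) irq domain, then dispatch */
    handle_domain_irq(NULL, hwirq, regs);
}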
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- call clocksource_probe()
- This in turn needs of_clk_init() to be called earlier
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: broken off from a bigger patch]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Initial HIGHMEM support on ARC was introduced for PAE40, where the low
memory (0x8000_0000 based) and high memory (0x1_0000_0000) were
physically contiguous. So CONFIG_FLATMEM sufficed (despite a peripheral
hole in the middle, which wasted a bit of struct page memory, but things
worked).
However w/o PAE, highmem was not possible and we could only reach
~1.75GB of DDR. Now there is a use case to access ~4GB of DDR w/o PAE40.
The idea is to have low memory at the canonical 0x8000_0000 and highmem
at 0, so the entire 4GB address space is available for physical addressing.
This needs an additional platform/interconnect mapping to convert
the non-contiguous physical addresses into linear bus addresses.
From the Linux point of view, the non-contiguous divide means FLATMEM no
longer works and DISCONTIGMEM is needed to track the pfns in the 2
regions.
This scheme would also work for PAE40, only better in that we don't
waste struct page memory for the peripheral hole.
The DT description will be something like:
memory {
    ...
    reg = <0x80000000 0x20000000    /* 512MB: lowmem */
           0x00000000 0x10000000>;  /* 256MB: highmem */
};
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Commit 5f8fc43217 ("PCI: Include pci/pcie/Kconfig directly from
pci/Kconfig") in linux-next changed drivers/pci/Kconfig to include
drivers/pci/pcie/Kconfig itself, so that architectures do not need
to source both files themselves. ARC just recently gained PCI support
through commit 6b3fb77998dd ("ARC: Add PCI support"), but this change
was based on the old behaviour of the Kconfig files. This makes
Kconfig now spit out the following warnings:
drivers/pci/pcie/Kconfig:61:warning: choice value used outside its choice group
drivers/pci/pcie/Kconfig:67:warning: choice value used outside its choice group
drivers/pci/pcie/Kconfig:74:warning: choice value used outside its choice group
This change updates the Kconfig file for ARC, dropping the now
unnecessary 'source' statement, which makes the warning disappear.
Signed-off-by: Andreas Ziegler <andreas.ziegler@fau.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Merge tag 'arc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC architecture updates from Vineet Gupta:
- Big Endian io accessors fix [Lada]
- Spellos fixes [Adam]
- Fix for DW GMAC breakage [Alexey]
- Making DMA API 64-bit ready
- Shutting up -Wmaybe-uninitialized noise for ARC
- Other minor fixes here and there, comments update
* tag 'arc-4.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: (21 commits)
ARCv2: ioremap: Support dynamic peripheral address space
ARC: dma: reintroduce platform specific dma<->phys
ARC: dma: ioremap: use phys_addr_t consistenctly in code paths
ARC: dma: pass_phys() not sg_virt() to cache ops
ARC: dma: non-coherent pages need V-P mapping if in HIGHMEM
ARC: dma: Use struct page based page allocator helpers
ARC: build: Turn off -Wmaybe-uninitialized for ARC gcc 4.8
ARC: [plat-axs10x] add Ethernet PHY description in .dts
arc: use of_platform_default_populate() to populate default bus
ARC: thp: unbork !CONFIG_TRANSPARENT_HUGEPAGE build
arc: [plat-nsimosci*] use ezchip network driver
ARCv2: LLSC: software backoff is NOT needed starting HS2.1c
ARC: mm: Use virt_to_pfn() for addr >> PAGE_SHIFT pattern
ARC: [plat-nsim] document ranges
ARC: build: Better way to detect ISA compatible toolchain
ARCv2: Allow enabling PAE40 w/o HIGHMEM
ARC: [BE] readl()/writel() to work in Big Endian CPU configuration
ARC: [*defconfig] No need to specify CONFIG_CROSS_COMPILE
ARC: [BE] Select correct CROSS_COMPILE prefix
ARC: bitops: Remove non relevant comments
...
This allows for regression testing in PAE specific code as we lack
a 32+ bit physical memory platform other than nSIM.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Add PCI support to ARC and update drivers/pci Makefile enabling the ARC
arch to use the generic PCI setup functions.
[bhelgaas: fold in Joao's pci-dma-compat.h & pci-bridge.h build fix (I
should have caught this myself, sorry)]
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Even though DEVTMPFS is required when our pre-built initramfs
is used, that is not the case in general. It is perfectly possible
to use an initramfs with device nodes already populated, or there
could be other usages; see the discussion below for more details:
http://thread.gmane.org/gmane.comp.embedded.openwrt.devel/37819/focus=37821
This change removes the mentioned dependency from arch/arc/Kconfig,
updating instead those defconfigs that are usually used with this
kind of pre-built initramfs.
And while at it, all touched defconfigs were regenerated via
savedefconfig and some options were removed:
* USB is selected by other options implicitly
* VGA_CONSOLE is disabled for ARC since
031e29b587
* EXT3_FS automatically selects EXT4_FS
* MTDxxx and JFFS2_FS make no sense for AXS because the
AXS NAND controller is not upstreamed
* NET_OSCI_LAN is not upstream either
* ARCPGU_xxx options make no sense because ARC PGU is not yet
upstream, and when it gets there all config options will
be taken from the devicetree
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
It is unlikely that designs running Linux will not have a multiplier.
Further, the current support is not complete as the tools don't generate a
multilib w/o the multiplier.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Merge tag 'arc-4.5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fixes from Vineet Gupta:
"I've been sitting on some of these fixes for a while.
- Corner case of returning to delay slot from interrupt
- Changing default interrupt priority level
- Kconfig'ize support for super pages
- Other minor fixes"
* tag 'arc-4.5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: mm: Introduce explicit super page size support
ARCv2: intc: Allow interruption by lowest priority interrupt
ARCv2: Check for LL-SC livelock only if LLSC is enabled
ARC: shrink cpuinfo by not saving full timer BCR
ARCv2: clocksource: Rename GRTC -> GFRC
ARCv2: STAR 9000950267: Handle return from intr to Delay Slot #2
...
MMUv4 supports 2 concurrent page sizes: Normal and Super [4K to 16M].
So far Linux supported a single super page size for a given Normal page size,
depending on the software page walking address split.
e.g. we had an 11:8:13 address split for the 8K page, which meant the super
page was 2^(8+13) = 2M (given that the THP size has to be PMD sized).
Now we turn this around, by allowing multiple Super Page sizes in Kconfig
(currently 2M and 16M only) and forcing the page walker address split to
PGDIR_SHIFT and PAGE_SHIFT.
For configs without a Super page, things are the same as before and
PGDIR_SHIFT can be hacked to get a non-default address split.
The motivation for this change is a customer who needs a 16M super page
and an 8K Normal page combo.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
As illustrated by commit a3afe70b83 ("[S390] latencytop s390
support."), HAVE_LATENCYTOP_SUPPORT is defined by an architecture to
advertise an implementation of save_stack_trace_tsk.
However, as of 9212ddb5ea ("stacktrace: provide save_stack_trace_tsk()
weak alias") a dummy implementation is provided if STACKTRACE=y. Given
that LATENCYTOP already depends on STACKTRACE_SUPPORT and selects
STACKTRACE, we can remove HAVE_LATENCYTOP_SUPPORT altogether.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Helge Deller <deller@gmx.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
HIGHMEM support bumped the default memory size for the nsim platform to 1G.
Thus total memory ended right at the edge of the start of the peripheral
address space. With the Linux link base shifted, memory started bleeding into
peripheral space, which caused early boot bad_page spew!
Fixes: 29e332261d ("ARC: mm: HIGHMEM: populate high memory from DT")
Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is the first working implementation of 40-bit physical address
extension on ARCv2.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Implement the kmap* API for ARC.
This enables
- permanent kernel maps (pkmaps) : kmap() API
- fixmap : kmap_atomic()
We use a very simple/uniform approach for both (unlike some of the other
arches), so fixmap doesn't use the customary compile time address stuff.
The important semantic is sleep'ability (pkmap) vs. not (fixmap), which
the API guarantees.
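For illustration, the usage contract looks roughly like this (a sketch with hypothetical helper names, needs <linux/highmem.h>; not code from this patch):

static void fill_page_sleepable(struct page *page, const void *src)
{
    void *p = kmap(page);          /* pkmap: mapping persists, caller may sleep */
    memcpy(p, src, PAGE_SIZE);
    kunmap(page);
}

static void fill_page_atomic(struct page *page, const void *src)
{
    void *p = kmap_atomic(page);   /* fixmap: usable in atomic context, must not sleep */
    memcpy(p, src, PAGE_SIZE);
    kunmap_atomic(p);
}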
Note that this patch only enables highmem for the subsequent PAE40 support,
as there is no real highmem for ARC in the pure 32-bit paradigm, as explained
below.
ARC has a 2:2 address split of the 32-bit address space, with the lower half
being translated (virtual) while the upper half is untranslated
(0x8000_0000 to 0xFFFF_FFFF). The kernel itself is linked at the base of the
untranslated space (i.e. 0x8000_0000 onwards), which is mapped to say
DDR 0x0 by external Bus Glue logic (outside the core). So the kernel can
potentially access 1.75G worth of memory directly w/o need for highmem
(the top 256M is taken by uncached peripheral space from 0xF000_0000 to
0xFFFF_FFFF).
In PAE40, hardware can address memory beyond 4G (0x1_0000_0000) while
the logical/virtual addresses remain 32-bit. Thus highmem is required
for the kernel proper to be able to access these pages for its own purposes
(user space is agnostic to this anyway).
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For Run-on-reset, non-masters need to spin wait. For Halt-on-reset they
can jump to the entry point directly.
Also while at it, the reset vector handler is made "the" entry point for the
kernel, including host debugger based boot (which uses the ELF header
entry point).
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
MMUv4 in HS38x cores supports Super Pages which are the basis for Linux THP
support.
Normal and Super pages can co-exist (of course not overlap) in the TLB, with a
new bit "SZ" in the TLB page descriptor to distinguish between them.
Super Page size is configurable in hardware (4K to 16M), but fixed once
the RTL is built.
The exact THP size a Linux configuration will support is a function of:
- MMU page size (typically 8K, RTL fixed)
- software page walker address split between PGD:PTE:PFN (typically
11:8:13, but can be changed with 1 line)
So for the above defaults, the THP size supported is 8K * 256 = 2M.
The default Page Walker is 2 levels, PGD:PTE:PFN, which in the THP regime
reduces to 1 level (as the PTE is folded into the PGD and canonically referred
to as PMD).
Thus the THP PMD accessors are implemented in terms of PTE (just like sparc).
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
ARC doesn't need the runtime detection of futex cmpxchg op
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is to work around the llock/scond livelock.
HS38x4 could get into an LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among the contending cores, leading to a never ending cycle. So break the cycle
by deferring the retry of the failed exclusive access (SCOND). The actual delay
needed is a function of the number of contending cores as well as the unrelated
coherency traffic from other cores. To keep the code simple, start off with a
small delay of 1 which would suffice most cases, and in case of contention
double the delay. Eventually the delay is sufficient such that the coherency
pipeline is drained, thus a subsequent exclusive access would succeed.
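In C-like pseudocode the scheme looks roughly as follows (the actual patch implements this in assembler macros; the helper names below are purely illustrative):

static inline void set_bit_with_backoff(unsigned long *addr, unsigned long mask)
{
    unsigned int delay = 1;
    unsigned long val;

    do {
        val = load_locked(addr);                /* LLOCK                          */
        val |= mask;
        if (store_conditional(addr, val))       /* SCOND succeeded, done          */
            break;
        delay_loop(delay);                      /* back off: let the SCU drain    */
        if (delay < MAX_BACKOFF)
            delay *= 2;                         /* double delay on repeated fails */
    } while (1);
}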
Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Being a highly configurable core, ARC HS might, among other features, be
configured with or without DIV_REM_OPTION (a hardware divider).
That option, when enabled, adds the following instructions: div, divu, rem, remu.
By default ARC HS38 has this option enabled, so we add here the possibility
to disable the compiler's use of the hardware divider.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>