mirror of https://github.com/edk2-porting/linux-next.git synced 2025-01-02 10:43:57 +08:00
Commit Graph

111764 Commits

Author SHA1 Message Date
Alexey Brodkin
36481cf7fb ARCv2: perf: Support sampling events using overflow interrupts
In the days of ARC 700, performance counters had no interrupt support,
so on ARC we only supported non-sampling events.

Put simply, only "perf stat" was functional.

Now ARC HS performance counters do support interrupts, and this change
adds support for them.

ARC performance counters act in the following way with regard to
interrupt generation:
 [1] A counter counts up from the value set in the PCT_COUNT register pair
 [2] Once the counter reaches the value set in PCT_INT_CNT, an interrupt
     is raised

The basic setup looks like this:
 [1] PCT_COUNT = 0;
 [2] PCT_INT_CNT = __limit_value__;
 [3] Enable interrupts for that counter and let it run
 [4] Let counter reach its limit
 [5] Handle interrupt when it happens

Note that the PCT HW block is built into the CPU core, so its interrupt
line (which is basically the OR of all counters' IRQs) is wired directly
to the top-level IRQC. That means that to de-assert the PCT interrupt,
the IRQs of all counters that have reached their limit values must be
reset.
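
A minimal sketch of that setup sequence, assuming the kernel's
write_aux_reg() helper; the exact ARC_REG_PCT_* register names and the
interrupt-enable bit layout are illustrative, not lifted from the driver:

 ---------->8-----------
 /* [1] start counting from 0 (64-bit counter, two 32-bit halves) */
 write_aux_reg(ARC_REG_PCT_COUNTL, 0);
 write_aux_reg(ARC_REG_PCT_COUNTH, 0);

 /* [2] raise an interrupt once the counter reaches 'limit' */
 write_aux_reg(ARC_REG_PCT_INT_CNTL, (u32)limit);
 write_aux_reg(ARC_REG_PCT_INT_CNTH, (u32)(limit >> 32));

 /* [3] enable the interrupt for counter 'idx' and let it run */
 write_aux_reg(ARC_REG_PCT_INT_CTRL, 1U << idx);
 ---------->8-----------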

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:43 +05:30
Alexey Brodkin
1fe8bfa5ff ARCv2: perf: implement "event_set_period"
This generalization prepares for support of overflow interrupts.

Hardware event counters on ARC work this way:
each counter counts from a programmed start value (set in
ARC_REG_PCT_COUNT) to a limit value (set in ARC_REG_PCT_INT_CNT), and
once the limit value is reached the counter generates an interrupt.

Even though this hardware implementation allows for more flexibility,
in the Linux kernel we decided to mimic the behavior of other
architectures this way:

 [1] Set the limit value to half of the counter's max value (to allow the
     counter to keep running after reaching its limit, see below for more
     explanation):
 ---------->8-----------
 arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
 ---------->8-----------

 [2] Set the start value to "arc_pmu->max_period - sample_period" and
     then count up to the limit

Our event counters don't stop on reaching the max value (the one we set
in ARC_REG_PCT_INT_CNT) but continue to count until the kernel
explicitly stops each of them.

Setting the limit to half of the counter's capacity allows capturing the
additional events that occur between the moment the interrupt is
triggered and the moment we actually process it in the PMU IRQ handler.
That way we're more precise.

For example, if we count CPU cycles, we keep track of the cycles spent
running through the generic IRQ handling code:

 [1] We set the counter period to, say, 100_000 events of type "crun"
 [2] The counter reaches that limit and raises its interrupt
 [3] Once we get into the PMU IRQ handler we read the current counter
     value from ARC_REG_PCT_SNAP and see something like 105_000 there.

If counters stopped on reaching the limit value, we would miss those
additional 5_000 cycles.
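
A sketch of the period arithmetic described above; arc_pmu, counter_size
and sample_period are the quantities named in this message:

 ---------->8-----------
 /* limit = half the counter range, so slack after the IRQ is still counted */
 arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;

 /* start value chosen so the IRQ fires after sample_period events */
 u64 start = arc_pmu->max_period - sample_period;
 ---------->8-----------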

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:29 +05:30
Vineet Gupta
fb7c572551 ARC: perf: cap the number of counters to hardware max of 32
The number of counters in the PCT can never be more than 32 (while
countable conditions could be 100+), for both ARCompact and ARCv2.

And while at it, update the copyright dates.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:03 +05:30
Vineet Gupta
fd0881a24a ARC: Eliminate some ARCv2 specific code for ARCompact build
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-21 15:06:43 +05:30
Vineet Gupta
090749502f ARC: add/fix some comments in code - no functional change
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 19:05:49 +05:30
Yuriy Kolerov
6de6066c0d ARC: change some branches to jumps to resolve linkage errors
When the kernel binary becomes large enough (32M and more), errors
may occur during the final link stage. This happens because
the build system uses short relocations for ARC by default.
The problem can easily be resolved by passing the -mlong-calls
option to GCC to use long absolute jumps (j) instead of short
relative branches (b).

But there are fragments of pure assembler code which use
branches in inappropriate places and cause a link error because
of relocation overflow.

The first of these fragments is the .fixup insertion in futex.h and
unaligned.c. It inserts code into a separate section (.fixup)
with a branch instruction, which leads to a link error when the
kernel becomes large.

The second is the calling of scheduler functions
(common kernel code) from ARC's entry.S. When the kernel
binary becomes large this may lead to a link error, because the
scheduler may end up far enough away from ARC's code in the final
binary.

Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:53:15 +05:30
Vineet Gupta
eb2cd8b72b ARC: ensure futex ops are atomic in !LLSC config
W/o hardware-assisted atomic r-m-w the best we can do is to disable
preemption.
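
A hypothetical sketch of the idea for a !LLSC uniprocessor config, where
disabling preemption suffices to make the read-modify-write atomic with
respect to other tasks (function and variable names are illustrative;
the futex core has already disabled pagefaults at this point):

 ---------->8-----------
 static int futex_op_add(u32 __user *uaddr, u32 oparg, u32 *oldval)
 {
         int ret;

         preempt_disable();      /* no other task can interleave with us */
         ret = __get_user(*oldval, uaddr);
         if (!ret)
                 ret = __put_user(*oldval + oparg, uaddr);
         preempt_enable();

         return ret;             /* 0 on success, -EFAULT on a fault */
 }
 ---------->8-----------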

Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:16:01 +05:30
Vineet Gupta
5e0574292a ARC: Enable HAVE_FUTEX_CMPXCHG
ARC doesn't need the runtime detection of futex cmpxchg op

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:16:01 +05:30
Vineet Gupta
882a95ae0a ARC: make futex_atomic_cmpxchg_inatomic() return bimodal
Callers of cmpxchg_futex_value_locked() in futex code expect bimodal
return value:
  !0 (essentially -EFAULT as failure)
   0 (success)

Before this patch, the success return value was the old value of the
futex, which could very well be non-zero, possibly causing the caller to
take the failure path erroneously.

Fix that by returning 0 on success.

(This fix was done back in 2011 for all upstream arches, which ARC
obviously missed)
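
A sketch of the calling convention the futex core expects; uaddr, oldval
and newval are assumed to be in scope, and the helper's signature matches
the generic futex_atomic_cmpxchg_inatomic():

 ---------->8-----------
 u32 curval;
 int ret;

 /* bimodal: 0 on success, !0 (essentially -EFAULT) on failure;
  * the old futex value comes back via curval, not the return code */
 ret = futex_atomic_cmpxchg_inatomic(&curval, uaddr, oldval, newval);
 if (ret)
         return ret;
 /* success: compare curval with oldval to see if the cmpxchg hit */
 ---------->8-----------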

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:16:00 +05:30
Vineet Gupta
ed574e2bbd ARC: futex cosmetics
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:16:00 +05:30
Vineet Gupta
31d30c8208 ARC: add barriers to futex code
The atomic ops on futexes need to provide the full barrier just like
regular atomics in the kernel.

Also remove pagefault_enable/disable in futex_atomic_cmpxchg_inatomic(),
as the core code already does that.

Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:15:59 +05:30
Alexey Brodkin
1648c70d30 ARCv2: IOC: Allow boot time disable
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:15:31 +05:30
Vineet Gupta
79335a2ca0 ARCv2: SLC: Allow boot time disable
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:11:52 +05:30
Alexey Brodkin
f2b0b25a37 ARCv2: Support IO Coherency and permutations involving L1 and L2 caches
In the case of an ARCv2 CPU, there could be the following configurations
that affect cache handling for data exchanged with peripherals
via DMA:
 [1] Only L1 cache exists
 [2] Both L1 and L2 exist, but no IO coherency unit
 [3] L1, L2 caches and IO coherency unit exist

The current implementation takes care of [1] and [2].
Moreover, support for [2] is implemented with a run-time check
for SLC existence, which is not optimal.

This patch introduces support for [3] and reworks DMA ops
usage. Instead of doing a run-time check every time a particular
DMA op is executed, we'll have 3 different implementations of the
DMA ops and select the appropriate one during init (see the sketch
after the list below).

As for IOC support, we need to:
 [a] Implement empty DMA ops, because the IOC takes care of cache
     coherency for DMAed data
 [b] Route dma_alloc_coherent() via dma_alloc_noncoherent().
     This is required to make IOC work in the first place and also
     serves as an optimization, as LD/ST to coherent buffers can be
     serviced from caches w/o going all the way to memory
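
A minimal sketch of the init-time selection; the ioc_exists/slc_exists
flags and the arc_dma_ops_* names are illustrative, not the identifiers
used in the patch:

 ---------->8-----------
 /* pick one DMA ops implementation once, instead of checking per-op */
 if (ioc_exists)
         dma_ops = &arc_dma_ops_ioc;   /* [3] empty cache ops, IOC snoops */
 else if (slc_exists)
         dma_ops = &arc_dma_ops_slc;   /* [2] maintain both L1 and L2 */
 else
         dma_ops = &arc_dma_ops_l1;    /* [1] maintain L1 only */
 ---------->8-----------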

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
[vgupta:
  -Added some comments about IOC gains
  -Marked dma ops as static,
  -Massaged changelog a bit]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:11:17 +05:30
Vineet Gupta
2a4401687c ARC: Enable optimistic spinning for LLSC config
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-11 14:51:09 +05:30
Linus Torvalds
3fbdc37956 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS fixes from Ralf Baechle:
 "Another round of MIPS fixes for 4.2.  No area does particularly stand
  out but we have a two unpleasant ones:

   - Kernel ptes are marked with a global bit which allows the kernel to
     share kernel TLB entries between all processes.  For this to work
     both entries of an adjacent even/odd pte pair need to have the
     global bit set.  There has been a subtle race in setting the other
     entry's global bit since ~ 2000, but it takes particularly
     pathological workloads that essentially do mostly vmalloc/vfree to
     trigger this.

     This pull request fixes the 64-bit case but leaves the case of 32
     bit CPUs with 64 bit ptes unsolved for now.  The unfixed cases
     affect hardware that is not available in the field yet.

   - Instruction emulation requires loading instructions from user space
     but the current fast but simplistic approach will fail on pages
     that are PROT_EXEC but !PROT_READ.  For this reason we temporarily
     do not permit this permission and will map pages with PROT_EXEC |
     PROT_READ.

  The remainder of this pull request is more or less across the field
  and the short log explains them well"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: Make set_pte() SMP safe.
  MIPS: Replace add and sub instructions in relocate_kernel.S with addiu
  MIPS: Flush RPS on kernel entry with EVA
  Revert "MIPS: BCM63xx: Provide a plat_post_dma_flush hook"
  MIPS: BMIPS: Delete unused Kconfig symbol
  MIPS: Export get_c0_perfcount_int()
  MIPS: show_stack: Fix stack trace with EVA
  MIPS: do_mcheck: Fix kernel code dump with EVA
  MIPS: SMP: Don't increment irq_count multiple times for call function IPIs
  MIPS: Partially disable RIXI support.
  MIPS: Handle page faults of executable but unreadable pages correctly.
  MIPS: Malta: Don't reinitialise RTC
  MIPS: unaligned: Fix build error on big endian R6 kernels
  MIPS: Fix sched_getaffinity with MT FPAFF enabled
  MIPS: Fix build with CONFIG_OF=y for non OF-enabled targets
  CPUFREQ: Loongson2: Fix broken build due to incorrect include.
2015-08-09 05:59:21 +03:00
Linus Torvalds
dd2384a75d Enabling a reduced config of HS38 (w/o div-rem, ll64...)
Adding software workaround for LLOCK/SCOND livelock
 Fallout of a recent pt_regs update
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJVxG0RAAoJEGnX8d3iisJe0icP/1LHI7JqO9Th52cob6aYfcC/
 Xi5LLQTTEz/68cvXHxYZo/wfaj+lZQL7YlugIHhnJfbbyXrnimoP+ljZY4VGnEfl
 F1g7dtltxVMZ7eB4/o9QsfT6ChwC5QvuHkKh+tTo8ZJ3YrU+jRwfQtPWrL/+cRnH
 U1U0dgmoU4gw5sP9+pPqcCS9qsw6ODjvFXP5QdbegTEqbLrkG77aqEaHD0WoUfKJ
 dgmluwYYRonMc1LTapNH7JeoNff5GPNDawItviuZ4tO/fPelNsujN5CYnro6thtp
 EKY1LHWicohVOJJ9PBYHb+4Th/taMYCYHWt9y/nkI860Jj9meq3PDM53iNvSJ71p
 9pU8Mung7jcN6HLCpdUB3jEQKAkX/a2hOqvWqqYuSbzBUq29HdsJUFMleCnQO4Mj
 E4eJfe0keSMKk3WOS3tlwp2KDHN1JKMfYaqBJtbYSnlhI/cHAsF9yLOjLV5adc+b
 vHglS5x7H4YxqPbiUPlDZv2Q7OD4P32wRpCTxOYynCtWZalSQ++2e2enVL3oCy7l
 e0fT04dYndhy7vfI+YCpu4dRUBRin65klrP11damhlJgF3u7RudoJC8s71SMJqs4
 +TQsuhCz5oQZty8+mPbSilCa3FZNSSlWe4tHk/F5xQ/BASXtq5m8lb9Z1SllUbKu
 +d0xcKVIDFaDTU9keHm6
 =gFBX
 -----END PGP SIGNATURE-----

Merge tag 'arc-v4.2-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc

Pull ARC fixes from Vineet Gupta:
 "Here's a late pull request for accumulated ARC fixes which came out of
  extended testing of the new ARCv2 port with LTP etc.  llock/scond
  livelock workaround has been reviewed by PeterZ.  The changes look like
  a lot, but I've crafted them into finer-grained patches for better
  tracking later.

  I have some more fixes (ARC Futex backend) ready to go but those will
  have to wait for tglx to return from vacation.

  Summary:
   - Enable a reduced config of HS38 (w/o div-rem, ll64...)
   - Add software workaround for LLOCK/SCOND livelock
   - Fallout of a recent pt_regs update"

* tag 'arc-v4.2-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff
  ARC: Make pt_regs regs unsigned
  ARCv2: spinlock/rwlock: Reset retry delay when starting a new spin-wait cycle
  ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff
  ARC: LLOCK/SCOND based rwlock
  ARC: LLOCK/SCOND based spin_lock
  ARC: refactor atomic inline asm operands with symbolic names
  Revert "ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock"
  ARCv2: [axs103_smp] Reduce clk for Quad FPGA configs
  ARCv2: Fix the peripheral address space detection
  ARCv2: allow selection of page size for MMUv4
  ARCv2: lib: memset: Don't assume 64-bit load/stores
  ARCv2: lib: memcpy: Missing PREFETCHW
  ARCv2: add knob for DIV_REV in Kconfig
  ARC/time: Migrate to new 'set-state' interface
2015-08-08 04:38:00 +03:00
Linus Torvalds
d5a8ab400b USB fixes for 4.2-rc6
Here are some USB and PHY fixes for 4.2-rc6 that resolve some reported
 issues.
 
 All of these have been in the linux-next tree for a while, full details
 on the patches are in the shortlog below.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iEYEABECAAYFAlXFMe4ACgkQMUfUDdst+ykcxgCfbv91Efs37AUe7LDiIpaperpJ
 wIMAn3xwaOsw8qrpcg3YGOaSggMhuuMC
 =vwz8
 -----END PGP SIGNATURE-----

Merge tag 'usb-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB fixes from Greg KH:
 "Here are some USB and PHY fixes for 4.2-rc6 that resolve some reported
  issues.

  All of these have been in the linux-next tree for a while, full
  details on the patches are in the shortlog below"

* tag 'usb-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  ARM: dts: dra7: Add syscon-pllreset syscon to SATA PHY
  drivers/usb: Delete XHCI command timer if necessary
  xhci: fix off by one error in TRB DMA address boundary check
  usb: udc: core: add device_del() call to error pathway
  phy: ti-pipe3: i783 workaround for SATA lockup after dpll unlock/relock
  phy-sun4i-usb: Add missing EXPORT_SYMBOL_GPL for sun4i_usb_phy_set_squelch_detect
  USB: sierra: add 1199:68AB device ID
  usb: gadget: f_printer: actually limit the number of instances
  usb: gadget: f_hid: actually limit the number of instances
  usb: gadget: f_uac2: fix calculation of uac2->p_interval
  usb: gadget: bdc: fix a driver crash on disconnect
  usb: chipidea: ehci_init_driver is intended to call one time
  USB: qcserial: Add support for Dell Wireless 5809e 4G Modem
  USB: qcserial/option: make AT URCs work for Sierra Wireless MC7305/MC7355
2015-08-08 04:27:51 +03:00
Vineet Gupta
1097163870 ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff
The increment of the delay counter took 2 instructions:
Arithmetic Shift Left (ASL) + set to 1 on overflow

This can be done in 1 instruction using Rotate Left (ROL)
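
A C-level sketch of the equivalence (the real change is in ARC inline
asm); for a 32-bit delay counter that starts at 1 and doubles, a
rotate-left collapses the shift-plus-overflow-fixup into one operation:

 ---------->8-----------
 /* before: 2 steps - shift, then reset to 1 when the bit falls off */
 delay <<= 1;
 if (!delay)
         delay = 1;

 /* after: 1 step - rotate-left wraps 0x80000000 around to 1 */
 delay = (delay << 1) | (delay >> 31);
 ---------->8-----------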

Suggested-by: Nigel Topham <ntopham@synopsys.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-07 13:56:16 +05:30
Linus Torvalds
49d7c6559b Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc
Pull sparc fix from David Miller:
 "FPU register corruption bug fix"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix userspace FPU register corruptions.
2015-08-07 05:28:24 +03:00
David S. Miller
44922150d8 sparc64: Fix userspace FPU register corruptions.
If we have a series of events from userspace, with %fprs=FPRS_FEF,
as follows:

ETRAP
	ETRAP
		VIS_ENTRY(fprs=0x4)
		VIS_EXIT
		RTRAP (kernel FPU restore with fpu_saved=0x4)
	RTRAP

We will not restore the user registers that were clobbered by the
FPU-using kernel code in the inner-most trap.

Traps allocate FPU save slots in the thread struct, and FPU-using
sequences save only the "dirty" FPU registers.

This works at the initial trap level because all of the registers
get recorded into the top-level FPU save area, and we'll return
to userspace with the FPU disabled so that any FPU use by the user
will take an FPU disabled trap wherein we'll load the registers
back up properly.

But this is not how trap returns from kernel to kernel operate.

The simplest fix for this bug is to always save all FPU register state
for anything other than the top-most FPU save area.

Getting rid of the optimized inner-slot FPU saving code ends up
making VISEntryHalf degenerate into plain VISEntry.

Longer term we need to do something smarter to reinstate the partial
save optimizations.  Perhaps the fundamental error is having trap entry
and exit allocate FPU save slots and restore register state.  Instead,
the VISEntry et al. calls should be doing that work.

This bug is about two decades old.

Reported-by: James Y Knight <jyknight@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 19:13:25 -07:00
Amanieu d'Antras
26135022f8 signal: fix information leak in copy_siginfo_to_user
This function may copy the si_addr_lsb, si_lower and si_upper fields to
user mode when they haven't been initialized, which can leak kernel
stack data to user mode.

Just checking the value of si_code is insufficient because the same
si_code value is shared between multiple signals.  This is solved by
checking the value of si_signo in addition to si_code.
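
A sketch of the shape of the fix for the SIGBUS memory-error case; the
siginfo field names are real, but the surrounding copy routine and the
err/to/from variables are assumed context:

 ---------->8-----------
 /* si_code values overlap between signals, so qualify the check
  * with si_signo before copying union members like si_addr_lsb */
 if (from->si_signo == SIGBUS &&
     (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO))
         err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);
 ---------->8-----------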

Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-07 04:39:40 +03:00
Amanieu d'Antras
3c00cb5e68 signal: fix information leak in copy_siginfo_from_user32
This function can leak kernel stack data when the user siginfo_t has a
positive si_code value.  The top 16 bits of si_code describe which fields
in the siginfo_t union are active, but they are treated inconsistently
between copy_siginfo_from_user32, copy_siginfo_to_user32 and
copy_siginfo_to_user.

copy_siginfo_from_user32 is called from rt_sigqueueinfo and
rt_tgsigqueueinfo, in which the user has full control over the top 16
bits of si_code.

This fixes the following information leaks:
x86:   8 bytes leaked when sending a signal from a 32-bit process to
       itself. This leak grows to 16 bytes if the process uses x32.
       (si_code = __SI_CHLD)
x86:   100 bytes leaked when sending a signal from a 32-bit process to
       a 64-bit process. (si_code = -1)
sparc: 4 bytes leaked when sending a signal from a 32-bit process to a
       64-bit process. (si_code = any)

parisc and s390 have similar bugs, but they are not vulnerable because
rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code
to a different process.  These bugs are also fixed for consistency.

Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-07 04:39:40 +03:00
Greg Kroah-Hartman
0a1b6f6319 phy: for 4.2-rc6
*) Fix compiler error when sun4i usb phy driver is built as module
 *) Fix SATA Lockup issue in dra7 SoC
 
 Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQIcBAABAgAGBQJVwOpCAAoJEA5ceFyATYLZTikP/17trNhiyCOL8s0ok1pbBR+B
 9ilzq7dOuVPrQUdvJycpH6DdajsuUN0GzzftjkN2h96v2ck2yVb8RgnhPB2aGKJm
 Nb/UTPgX1GUaRNQ/iZaKvDtSUjxQm3JWIaf06hC8rvd0D0k4WOvMxYJpHbGliBOf
 EPps3LZtnZn89htJniOUSDByrigTvfeWpKzU2guafoIEURNf6wGEYcCxOlYw9ST6
 W8LnSA5kkc3beRaGkNSzoQfq3zFJKF+8mPDARoiX6hBg+3oe9RmodIBKpD2Vl08k
 Y0eXtXhICxxqlGqRL71gfRXWDqNY9aMhXS0A/dyNvX2tDnXXNQF/Rjsxs+EPR0Td
 tFrwU/IzySmlCj4hrCEvImZnk1whe/kvsoHSXY7HSSVlXVQ/Nv5DulfxC8A8Nwx6
 v6fj3+3jwJRJnQgs1/PwYGpYFDh4L7dJRWKULjiilXeyhT9UJlH1IknjK1lhNdxg
 lIEEgckNTnBoU0cnW/7FEdbqnt562DLiZYSmXwKHffPrY4nCaaOMKz+Y0WAs+StW
 A+tDe+myPElGmJ01N8MF3QRpXFSdH8/06OJl5W4KhBXKMZXikMq2yZ4SpVdTiYfp
 4Nyiv1lblqwslESl3ru6KbcjUGkamnTXKA5p6u5/D5l+ACXgYkKsNZFN4gRwXpKi
 J/VGHy/PTOBo0W1saBuS
 =j6wu
 -----END PGP SIGNATURE-----

Merge tag 'phy-for-4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy into usb-linus

Kishon writes:

phy: for 4.2-rc6

*) Fix compiler error when sun4i usb phy driver is built as module
*) Fix SATA Lockup issue in dra7 SoC

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2015-08-05 10:12:23 -07:00
Linus Torvalds
4469942bbb Just two very small & simple patches.
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJVwhncAAoJEL/70l94x66Dy7IIAJXfraikJQ9ghhLhjrP+5f5H
 MNBL+e3jKGmGVgItrtOMcLlJJvPkFNBkFMmYRJtdawezu46eFBLnIoTp8ZcG6cvu
 5Gjs1PNfq1nP5IzWsYYbohlaf1xkij+Jm2JZ/fxuEGC6xM91WVGV7YENt87S7O16
 ZdfhhEFHTTe+Fg86QwDGZ2bOhTBwZEAaVFM6siCml/WiqYtecwzEn19OiP6XeVbO
 FczG7CUXumrPnEohYrAVrCtIIb5dGzUCstQGlo3bC7CJ/G6CjaBl4cSd6Y/BHkhD
 KV6M7VJxjJ84HAKy9PMhC2iPC7H7Vfjg1iq6czHWu/Tida0d6dBiVzLVKcz2jj4=
 =SYMM
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "Just two very small & simple patches"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ON
  KVM: s390: Fix hang VCPU hang/loop regression
2015-08-05 18:50:38 +03:00
Alex Williamson
fc1a8126bf KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ON
The patch was munged on commit, re-ordering these tests and resulting in
excessive warnings when trying to do device assignment.  Return to the
original ordering: https://lkml.org/lkml/2015/7/15/769

Fixes: 3e5d2fdced ("KVM: MTRR: simplify kvm_mtrr_get_guest_memory_type")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05 11:57:57 +02:00
David Daney
46011e6ea3 MIPS: Make set_pte() SMP safe.
On MIPS the GLOBAL bit of the PTE must have the same value in any
aligned pair of PTEs.  These pairs of PTEs are referred to as
"buddies".  In an SMP system it is possible for two CPUs to be calling
set_pte() on adjacent PTEs at the same time.  There is a race between
setting the PTE and a different CPU setting the GLOBAL bit in its
buddy PTE.

This race can be observed when multiple CPUs are executing
vmap()/vfree() at the same time.

Make setting the buddy PTE's GLOBAL bit an atomic operation to close
the race condition.

The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
handled.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10835/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-05 11:11:10 +02:00
Vineet Gupta
87ce62802f ARC: Make pt_regs regs unsigned
KGDB fails to build after f51e2f1911 ("ARC: make sure instruction_pointer()
returns unsigned value")

The hack to force one specific reg to unsigned backfired. There's no
reason to keep the regs signed after all.

|  CC      arch/arc/kernel/kgdb.o
|../arch/arc/kernel/kgdb.c: In function 'kgdb_trap':
| ../arch/arc/kernel/kgdb.c:180:29: error: lvalue required as left operand of assignment
|   instruction_pointer(regs) -= BREAK_INSTR_SIZE;

Reported-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Fixes: f51e2f1911 ("ARC: make sure instruction_pointer() returns unsigned value")
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-05 11:48:21 +05:30
Roger Quadros
257d5d9a9f ARM: dts: dra7: Add syscon-pllreset syscon to SATA PHY
This register needs to be passed to the SATA PHY driver to work
around erratum i783 (SATA Lockup After SATA DPLL Unlock/Relock).

Signed-off-by: Roger Quadros <rogerq@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2015-08-04 21:11:50 +05:30
Vineet Gupta
b89aa12c17 ARCv2: spinlock/rwlock: Reset retry delay when starting a new spin-wait cycle
The previous commit for delayed retry of SCOND needs some fine tuning
for spin locks.

The backoff from delayed retry, in conjunction with the spin looping on
the lock itself, can potentially cause the delay counter to reach high
values. So to provide fairness to any lock operation, after a lock
"seems" available (i.e. just before the first SCOND try), reset the
delay counter back to the starting value of 1.

Essentially, reset the delay to 1 for each new spin-wait-loop-acquire
cycle.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:35 +05:30
Vineet Gupta
e78fdfef84 ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff
This is to work around the llock/scond livelock.

HS38x4 could get into a LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among the contending cores, leading to a never-ending cycle. So break the
cycle by deferring the retry of the failed exclusive access (SCOND). The
actual delay needed is a function of the number of contending cores as well
as the unrelated coherency traffic from other cores. To keep the code
simple, start off with a small delay of 1, which suffices in most cases,
and double the delay on contention. Eventually the delay is sufficient for
the coherency pipeline to drain, and a subsequent exclusive access succeeds.
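
A C-level sketch of the retry loop (the actual workaround is ARC inline
asm; llock(), scond(), busy_wait() and addr are hypothetical stand-ins):

 ---------->8-----------
 unsigned int delay = 1;
 unsigned int val;

 for (;;) {
         val = llock(addr);              /* load-locked */
         if (scond(val + 1, addr) == 0)  /* store-conditional */
                 break;                  /* exclusive store succeeded */
         busy_wait(delay);               /* back off, let the SCU drain */
         delay <<= 1;                    /* exponential backoff */
 }
 ---------->8-----------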

Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:34 +05:30
Vineet Gupta
69cbe630f5 ARC: LLOCK/SCOND based rwlock
With LLOCK/SCOND, the rwlock counter can be atomically updated w/o need
for a guarding spin lock.

This in turn elides the EXchange-instruction-based spinning, which
forces the cacheline into exclusive state; with concurrent spinning
across cores, that would cause the line to keep bouncing around.
The LLOCK/SCOND based implementation is superior, as spinning on LLOCK
keeps the cacheline in shared state.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:33 +05:30
Vineet Gupta
ae7eae9e03 ARC: LLOCK/SCOND based spin_lock
The current spin_lock uses the EXchange instruction to implement the
atomic test-and-set of the lock location (it reads the orig value and
STs 1). This however forces the cacheline into exclusive state (because
of the ST) and concurrent loops in multiple cores will bounce the line
around between cores.

Instead, use LLOCK/SCOND to implement the atomic test-and-set, which is
better as the line stays in shared state while the lock is spinning on
LLOCK.

The real motivation of this change however is to make way for future
changes in atomics to implement delayed retry (with backoff).
An initial experiment with delayed retry in atomics combined with the
orig EX-based spinlock was a total disaster (it broke even LMBench), as
struct sock has a cache line sharing an atomic_t and a spinlock. The
tight spinning on the lock caused the atomic retry to keep backing off
such that it would never finish.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:33 +05:30
Vineet Gupta
8ac0665fb6 ARC: refactor atomic inline asm operands with symbolic names
This reduces the diff in forthcoming patches and also helps in better
understanding the incremental changes to the inline asm.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:32 +05:30
Vineet Gupta
f5959cb0c3 Revert "ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock"
Extended testing of the quad core configuration revealed that this fix
was insufficient. Specifically, LTP open posix shm_op/23-1 would cause
the hardware livelock in the llock/scond loop in update_cpu_load_active().

So remove this and make way for a proper workaround.

This reverts commit a5c8b52abe.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:31 +05:30
Vineet Gupta
6de7abfbad ARCv2: [axs103_smp] Reduce clk for Quad FPGA configs
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:30 +05:30
Vineet Gupta
e13c42ecbe ARCv2: Fix the peripheral address space detection
With the HS 2.1 release, the peripheral space register no longer contains
the uncached space specifics, causing the kernel to panic early on.
So read the newer NON VOLATILE AUX register to get that info.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-03 19:34:07 +05:30
James Cowgill
a4504755e7 MIPS: Replace add and sub instructions in relocate_kernel.S with addiu
Fixes the assembler errors generated when compiling a MIPS R6 kernel with
CONFIG_KEXEC on, by replacing the offending add and sub instructions with
addiu instructions.

Build errors:
arch/mips/kernel/relocate_kernel.S: Assembler messages:
arch/mips/kernel/relocate_kernel.S:27: Error: invalid operands `dadd $16,$16,8'
arch/mips/kernel/relocate_kernel.S:64: Error: invalid operands `dadd $20,$20,8'
arch/mips/kernel/relocate_kernel.S:65: Error: invalid operands `dadd $18,$18,8'
arch/mips/kernel/relocate_kernel.S:66: Error: invalid operands `dsub $22,$22,1'
scripts/Makefile.build:294: recipe for target 'arch/mips/kernel/relocate_kernel.o' failed

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10558/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 15:26:30 +02:00
James Hogan
3aff47c062 MIPS: Flush RPS on kernel entry with EVA
When EVA is enabled, flush the Return Prediction Stack (RPS) present on
some MIPS cores on entry to the kernel from user mode.

This is important specifically for interAptiv with EVA enabled,
otherwise kernel mode RPS mispredicts may trigger speculative fetches of
user return addresses, which may be sensitive in the kernel address
space due to EVA's overlapping user/kernel address spaces.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15.x-
Patchwork: https://patchwork.linux-mips.org/patch/10812/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 10:29:11 +02:00
Florian Fainelli
247bfb65d7 Revert "MIPS: BCM63xx: Provide a plat_post_dma_flush hook"
This reverts commit 3cf2954341 ("MIPS:
BCM63xx: Provide a plat_post_dma_flush hook") since this commit was
found to prevent BCM6358 (early BMIPS4350 cores) and some BCM6368
(BMIPS4380 cores) from booting reliably.

Alvaro was able to track this down to an issue specific to devices
that use the second thread (TP1) when booting. Since BCM63xx did not
have a need for the plat_post_dma_flush() hook before, let's just keep
things the way they were.

Reported-by: Álvaro Fernández Rojas <noltari@gmail.com>
Reported-by: Jonas Gorski <jogo@openwrt.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Nicolas Schichan <nschichan@freebox.fr>
Cc: linux-mips@linux-mips.org
Cc: blogic@openwrt.org
Cc: noltari@gmail.com
Cc: jogo@openwrt.org
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 10:19:22 +02:00
Kevin Cernekee
3592bb08fb MIPS: BMIPS: Delete unused Kconfig symbol
This was left over from an earlier iteration of the BMIPS irqchip changes.
It doesn't actually have an effect, so let's nuke it.

Reported-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9910/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 10:11:14 +02:00
Felix Fietkau
0cb0985f57 MIPS: Export get_c0_perfcount_int()
get_c0_perfcount_int is tested from oprofile code. If oprofile is
compiled as a module, get_c0_perfcount_int needs to be exported,
otherwise it cannot be resolved.

Fixes: a669efc4a3 ("MIPS: Add hook to get C0 performance counter interrupt")
Cc: stable@vger.kernel.org # v3.19+
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: linux-mips@linux-mips.org
Cc: abrestic@chromium.org
Patchwork: https://patchwork.linux-mips.org/patch/10763/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:18 +02:00
James Hogan
1e77863a51 MIPS: show_stack: Fix stack trace with EVA
The show_stack() function deals exclusively with kernel contexts, but if
it gets called in user context with EVA enabled, show_stacktrace() will
attempt to access the stack using EVA accesses, which will either read
other user mapped data, or more likely cause an exception which will be
handled by __get_user().

This is easily reproduced using SysRq t to show all task states, which
results in the following stack dump output:

 Stack : (Bad stack address)

Fix by setting the current user access mode to kernel around the call to
show_stacktrace(). This causes __get_user() to use normal loads to read
the kernel stack.

Now we get the correct output, like this:

 Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c
           1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002
           1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000
           00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
           00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0
           ...

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:17 +02:00
James Hogan
55c723e181 MIPS: do_mcheck: Fix kernel code dump with EVA
If a machine check exception is raised in kernel mode, user context,
with EVA enabled, then the do_mcheck handler will attempt to read the
code around the EPC using EVA load instructions, i.e. as if the reads
were from user mode. This will either read random user data if the
process has anything mapped at the same address, or it will cause an
exception which is handled by __get_user, resulting in this output:

 Code: (Bad address in epc)

Fix by setting the current user access mode to kernel if the saved
register context indicates the exception was taken in kernel mode. This
causes __get_user to use normal loads to read the kernel code.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10777/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:14 +02:00
Alex Smith
4ace6139bf MIPS: SMP: Don't increment irq_count multiple times for call function IPIs
The majority of SMP platforms handle their IPIs through do_IRQ(),
which calls irq_{enter,exit}(). When a call function IPI is received,
smp_call_function_interrupt() is called, which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.

When tick broadcasting is used (which is implemented via a call
function IPI), this incorrectly causes all CPU idle time on the core
receiving broadcast ticks to be accounted as time spent servicing
IRQs, as account_process_tick() will account as such if irq_count is
greater than 1. This results in 100% CPU usage being reported on a
core which receives its ticks via broadcast.

This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().
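
A sketch of the resulting handler shape on a do_IRQ()-based platform;
the handler name is illustrative:

 ---------->8-----------
 /* do_IRQ() has already done irq_enter(), so call the generic helper
  * directly instead of a wrapper that would nest irq_enter() again */
 static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
 {
         generic_smp_call_function_interrupt();
         return IRQ_HANDLED;
 }
 ---------->8-----------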

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:12 +02:00
Ralf Baechle
55fdcb2d56 MIPS: Partially disable RIXI support.
Execution of break instructions, trap instructions, emulation of unaligned
loads or floating point instructions - anything that tries to read the
instruction's opcode from userspace - needs read access to a page.

RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of
pages that are executable but not readable.  On such a mapping the attempted
load of the opcode by the kernel is going to cause an endless loop of
page faults.

The quick workaround for this is to disable the combination that the kernel
currently isn't able to handle, namely executable-but-unreadable mappings.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:11 +02:00
Ralf Baechle
e070dab735 MIPS: Handle page faults of executable but unreadable pages correctly.
Without this we end up taking exceptions in an endless loop, hanging the
thread.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:08 +02:00
James Hogan
106eccb4d2 MIPS: Malta: Don't reinitialise RTC
On Malta, since commit a87ea88d8f ("MIPS: Malta: initialise the RTC at
boot"), the RTC is reinitialised and forced into binary coded decimal
(BCD) mode during init, even if the bootloader has already initialised
it, and may even have already put it into binary mode (as YAMON does).
This corrupts the current time, can result in the RTC seconds being an
invalid BCD value (e.g. 0x1a..0x1f) for up to 6 seconds, and confuses
YAMON for a while after reset, enough for it to report timeouts when
attempting to load from TFTP (it actually uses the RTC in that code).

Therefore only initialise the RTC to the extent that is necessary so
that Linux avoids interfering with the bootloader setup, while also
allowing it to estimate the CPU frequency without hanging, without a
bootloader necessarily having done anything with the RTC (for example
when the kernel is loaded via EJTAG).

The divider control is configured for a 32 kHz reference clock if
necessary, and the SET bit of the RTC_CONTROL register is cleared if
necessary without changing any other bits (this bit will be set when
coming out of reset if the battery has been disconnected).

Fixes: a87ea88d8f ("MIPS: Malta: initialise the RTC at boot")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.14+
Patchwork: https://patchwork.linux-mips.org/patch/10739/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:07 +02:00
James Cowgill
531a6d599f MIPS: unaligned: Fix build error on big endian R6 kernels
Commit eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel
unaligned accesses") renamed the Load* and Store* defines in unaligned.c
to _Load* and _Store* as part of its fix. One define was missed, which
causes big endian R6 kernels to fail to build.

arch/mips/kernel/unaligned.c:880:35:
error: implicit declaration of function '_StoreDW'
 #define StoreDW(addr, value, res) _StoreDW(addr, value, res)
                                   ^

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel unaligned accesses")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10575/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:05 +02:00
Felix Fietkau
1d62d73755 MIPS: Fix sched_getaffinity with MT FPAFF enabled
p->thread.user_cpus_allowed is zero-initialized and is only filled on
the first sched_setaffinity call.

To avoid adding overhead in the task initialization codepath, simply OR
the returned mask in sched_getaffinity with p->cpus_allowed.
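
A sketch of the described fix using the standard cpumask API; mask and
p are the sched_getaffinity() output mask and target task:

 ---------->8-----------
 /* user_cpus_allowed may still be zero-initialised; OR-ing in
  * cpus_allowed guarantees the returned mask is never empty */
 cpumask_or(&mask, &p->thread.user_cpus_allowed, &p->cpus_allowed);
 ---------->8-----------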

Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10740/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:02 +02:00