drm-misc-next for 5.8:

UAPI Changes:
 
   - drm: error out with EBUSY when device has existing master
   - drm: rework SET_MASTER and DROP_MASTER perm handling
 
 Cross-subsystem Changes:
 
   - fbdev: savage: fix -Wextra build warning
   - video: omap2: Use scnprintf() for avoiding potential buffer overflow
 
 Core Changes:
 
   - Remove drm_pci.h
   - drm_pci_{alloc/free}() are now legacy
   - Introduce managed DRM resources
   - Allow drivers to subclass struct drm_framebuffer
   - Introduce struct drm_afbc_framebuffer and helpers
   - fbdev: remove return value from generic fbdev setup
   - Introduce simple-encoder helper
   - vram-helpers: set fence on plane
   - dp_mst: ACT timeout improvements
   - dp_mst: Remove drm_dp_mst_has_audio()
   - TTM: ttm_trace_dma_{map/unmap}() cleanups
   - dma-buf: add flag for PCI P2P support
   - EDID: Various improvements
   - Encoder: cleanup semantics of possible_clones and possible_crtcs
   - VBLANK documentation updates
   - Writeback documentation updates
 
 Driver Changes:
 
   - Convert several drivers to i2c_new_client_device()
   - Drop explicit drm_mode_config_cleanup() calls from drivers
   - Auto-release device structures with drmm_add_final_kfree()
   - Init fbdev console after registering DRM device
   - Make various .debugfs functions return 0 unconditionally; ignore errors
   - video: Use scnprintf() to avoid buffer overflows
   - Convert drivers to simple encoders
 
   - drm/amdgpu: note that we can handle peer2peer DMA-buf
   - drm/amdgpu: add support for exporting VRAM using DMA-buf v3
   - drm/kirin: Revert change to register connectors
   - drm/lima: Add optional devfreq and cooling device support
   - drm/lima: Various improvements wrt. task handling
   - drm/panel: nt39016: Support multiple modes and 50Hz
   - drm/panel: Support Leadtek LTK050H3146W
   - drm/rockchip: Add support for afbc
   - drm/virtio: Various cleanups
   - drm/hisilicon/hibmc: Enforce 128-byte stride alignment
   - drm/qxl: Fix notify port address of cursor ring buffer
   - drm/sun4i: Improvements to format handling
   - drm/bridge: dw-hdmi: Various improvements
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEchf7rIzpz2NEoWjlaA3BHVMLeiMFAl6VfAAACgkQaA3BHVML
 eiNjBwgAtzRaqrKX3c4aL4NCBmfWzqxvKN0fVcx8tHtjhmrPTLITsHCM+wfcD2qC
 lkr/RMYJT02pNPGnX3jamQk0q/2GKGagChVZgORRsdYOOf5IqGIjvllhkg+U+7YV
 X0pHAfvGk2VyriHYj3s/cnwi9OwZ2UFjdS+f/u2Qp9jQYG/k8u9CCSnzgratY99I
 bI4jZi6JIoRkwuBpBEc9NbrduenKhcYNgPLDiYXY2TFmVz89NwITPnLyf5FWG5zd
 HsQ+dfIS9eoIxL3DTRgBZrPMvrqgiUjztB7cM4bdE0ttwTS7MW6M50/iV553qb9k
 DZ1+/pWFFyZLOPUYc3EK/QYdu8R3QA==
 =MQkd
 -----END PGP SIGNATURE-----

Merge tag 'drm-misc-next-2020-04-14' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.8:

UAPI Changes:

  - drm: error out with EBUSY when device has existing master
  - drm: rework SET_MASTER and DROP_MASTER perm handling

Cross-subsystem Changes:

  - mm: export two symbols from slub/slob
  - fbdev: savage: fix -Wextra build warning
  - video: omap2: Use scnprintf() for avoiding potential buffer overflow

Core Changes:

  - Remove drm_pci.h
  - drm_pci_{alloc/free}() are now legacy
  - Introduce managed DRM resources
  - Allow drivers to subclass struct drm_framebuffer
  - Introduce struct drm_afbc_framebuffer and helpers
  - fbdev: remove return value from generic fbdev setup
  - Introduce simple-encoder helper
  - vram-helpers: set fence on plane
  - dp_mst: ACT timeout improvements
  - dp_mst: Remove drm_dp_mst_has_audio()
  - TTM: ttm_trace_dma_{map/unmap}() cleanups
  - dma-buf: add flag for PCI P2P support
  - EDID: Various improvements
  - Encoder: cleanup semantics of possible_clones and possible_crtcs
  - VBLANK documentation updates
  - Writeback documentation updates

Driver Changes:

  - Convert several drivers to i2c_new_client_device()
  - Drop explicit drm_mode_config_cleanup() calls from drivers
  - Auto-release device structures with drmm_add_final_kfree()
  - Init fbdev console after registering DRM device
  - Make various .debugfs functions return 0 unconditionally; ignore errors
  - video: Use scnprintf() to avoid buffer overflows
  - Convert drivers to simple encoders

  - drm/amdgpu: note that we can handle peer2peer DMA-buf
  - drm/amdgpu: add support for exporting VRAM using DMA-buf v3
  - drm/kirin: Revert change to register connectors
  - drm/lima: Add optional devfreq and cooling device support
  - drm/lima: Various improvements wrt. task handling
  - drm/panel: nt39016: Support multiple modes and 50Hz
  - drm/panel: Support Leadtek LTK050H3146W
  - drm/rockchip: Add support for afbc
  - drm/virtio: Various cleanups
  - drm/hisilicon/hibmc: Enforce 128-byte stride alignment
  - drm/qxl: Fix notify port address of cursor ring buffer
  - drm/sun4i: Improvements to format handling
  - drm/bridge: dw-hdmi: Various improvements

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20200414090738.GA16827@linux-uq9g
Dave Airlie 2020-04-22 10:40:34 +10:00
commit 1aa63ddf72
319 changed files with 7064 additions and 2689 deletions
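
For orientation, several driver hunks below (bochs, armada, komeda) switch to the managed-resource helpers called out under Core Changes. The following is a minimal sketch of that pattern, not taken from the series itself: the my_*() names and the private struct are made up, only the drmm_*() calls come from the patches below.

#include <linux/gfp.h>
#include <drm/drm_device.h>
#include <drm/drm_managed.h>
#include <drm/drm_mode_config.h>

struct my_private {		/* hypothetical per-device driver data */
	int dummy;
};

static int my_driver_load(struct drm_device *dev)
{
	struct my_private *priv;
	int ret;

	/* Allocation is tied to the drm_device and freed automatically
	 * on release; no kfree() in the unload or error paths. */
	priv = drmm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;
	dev->dev_private = priv;

	/* Mode-config cleanup is handled the same way, which is why the
	 * explicit drm_mode_config_cleanup() calls disappear below. */
	ret = drmm_mode_config_init(dev);
	if (ret)
		return ret;

	return 0;
}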


@ -0,0 +1,226 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/nwl-dsi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Northwest Logic MIPI-DSI controller on i.MX SoCs
maintainers:
- Guido Gúnther <agx@sigxcpu.org>
- Robert Chiras <robert.chiras@nxp.com>
description: |
NWL MIPI-DSI host controller found on i.MX8 platforms. This is a dsi bridge for
the SOCs NWL MIPI-DSI host controller.
properties:
compatible:
const: fsl,imx8mq-nwl-dsi
reg:
maxItems: 1
interrupts:
maxItems: 1
'#address-cells':
const: 1
'#size-cells':
const: 0
clocks:
items:
- description: DSI core clock
- description: RX_ESC clock (used in escape mode)
- description: TX_ESC clock (used in escape mode)
- description: PHY_REF clock
- description: LCDIF clock
clock-names:
items:
- const: core
- const: rx_esc
- const: tx_esc
- const: phy_ref
- const: lcdif
mux-controls:
description:
mux controller node to use for operating the input mux
phys:
maxItems: 1
description:
A phandle to the phy module representing the DPHY
phy-names:
items:
- const: dphy
power-domains:
maxItems: 1
resets:
items:
- description: dsi byte reset line
- description: dsi dpi reset line
- description: dsi esc reset line
- description: dsi pclk reset line
reset-names:
items:
- const: byte
- const: dpi
- const: esc
- const: pclk
ports:
type: object
description:
A node containing DSI input & output port nodes with endpoint
definitions as documented in
Documentation/devicetree/bindings/graph.txt.
properties:
port@0:
type: object
description:
Input port node to receive pixel data from the
display controller. Exactly one endpoint must be
specified.
properties:
'#address-cells':
const: 1
'#size-cells':
const: 0
endpoint@0:
description: sub-node describing the input from LCDIF
type: object
endpoint@1:
description: sub-node describing the input from DCSS
type: object
reg:
const: 0
required:
- '#address-cells'
- '#size-cells'
- reg
oneOf:
- required:
- endpoint@0
- required:
- endpoint@1
additionalProperties: false
port@1:
type: object
description:
DSI output port node to the panel or the next bridge
in the chain
'#address-cells':
const: 1
'#size-cells':
const: 0
required:
- '#address-cells'
- '#size-cells'
- port@0
- port@1
additionalProperties: false
patternProperties:
"^panel@[0-9]+$":
type: object
required:
- '#address-cells'
- '#size-cells'
- clock-names
- clocks
- compatible
- interrupts
- mux-controls
- phy-names
- phys
- ports
- reg
- reset-names
- resets
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx8mq-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/imx8mq-reset.h>
mipi_dsi: mipi_dsi@30a00000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,imx8mq-nwl-dsi";
reg = <0x30A00000 0x300>;
clocks = <&clk IMX8MQ_CLK_DSI_CORE>,
<&clk IMX8MQ_CLK_DSI_AHB>,
<&clk IMX8MQ_CLK_DSI_IPG_DIV>,
<&clk IMX8MQ_CLK_DSI_PHY_REF>,
<&clk IMX8MQ_CLK_LCDIF_PIXEL>;
clock-names = "core", "rx_esc", "tx_esc", "phy_ref", "lcdif";
interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
mux-controls = <&mux 0>;
power-domains = <&pgc_mipi>;
resets = <&src IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N>,
<&src IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N>,
<&src IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N>,
<&src IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N>;
reset-names = "byte", "dpi", "esc", "pclk";
phys = <&dphy>;
phy-names = "dphy";
panel@0 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "rocktech,jh057n00900";
reg = <0>;
port@0 {
reg = <0>;
panel_in: endpoint {
remote-endpoint = <&mipi_dsi_out>;
};
};
};
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
#size-cells = <0>;
#address-cells = <1>;
reg = <0>;
mipi_dsi_in: endpoint@0 {
reg = <0>;
remote-endpoint = <&lcdif_mipi_dsi>;
};
};
port@1 {
reg = <1>;
mipi_dsi_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
};
};


@ -24,6 +24,8 @@ properties:
- boe,tv101wum-n53
# AUO B101UAN08.3 10.1" WUXGA TFT LCD panel
- auo,b101uan08.3
# BOE TV105WUM-NW0 10.5" WUXGA TFT LCD panel
- boe,tv105wum-nw0
reg:
description: the virtual channel number of a DSI peripheral


@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/display/panel/display-timings.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: display timing bindings
title: display timings bindings
maintainers:
- Thierry Reding <thierry.reding@gmail.com>
@ -14,7 +14,7 @@ maintainers:
description: |
A display panel may be able to handle several display timings,
with different resolutions.
The display-timings node makes it possible to specify the timing
The display-timings node makes it possible to specify the timings
and to specify the timing that is native for the display.
properties:
@ -25,8 +25,8 @@ properties:
$ref: /schemas/types.yaml#/definitions/phandle
description: |
The default display timing is the one specified as native-mode.
If no native-mode is specified then the first node is assumed the
native mode.
If no native-mode is specified then the first node is assumed
to be the native mode.
patternProperties:
"^timing":


@ -1,20 +0,0 @@
Feiyang FY07024DI26A30-D 7" MIPI-DSI LCD Panel
Required properties:
- compatible: must be "feiyang,fy07024di26a30d"
- reg: DSI virtual channel used by that screen
- avdd-supply: analog regulator dc1 switch
- dvdd-supply: 3v3 digital regulator
- reset-gpios: a GPIO phandle for the reset pin
Optional properties:
- backlight: phandle for the backlight control.
panel@0 {
compatible = "feiyang,fy07024di26a30d";
reg = <0>;
avdd-supply = <&reg_dc1sw>;
dvdd-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
};


@ -0,0 +1,58 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/feiyang,fy07024di26a30d.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Feiyang FY07024DI26A30-D 7" MIPI-DSI LCD Panel
maintainers:
- Jagan Teki <jagan@amarulasolutions.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: feiyang,fy07024di26a30d
reg:
description: DSI virtual channel used by that screen
maxItems: 1
avdd-supply:
description: analog regulator dc1 switch
dvdd-supply:
description: 3v3 digital regulator
reset-gpios: true
backlight: true
required:
- compatible
- reg
- avdd-supply
- dvdd-supply
- reset-gpios
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "feiyang,fy07024di26a30d";
reg = <0>;
avdd-supply = <&reg_dc1sw>;
dvdd-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
};
};


@ -0,0 +1,51 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/leadtek,ltk050h3146w.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Leadtek LTK050H3146W 5.0in 720x1280 DSI panel
maintainers:
- Heiko Stuebner <heiko.stuebner@theobroma-systems.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
enum:
- leadtek,ltk050h3146w
- leadtek,ltk050h3146w-a2
reg: true
backlight: true
reset-gpios: true
iovcc-supply:
description: regulator that supplies the iovcc voltage
vci-supply:
description: regulator that supplies the vci voltage
required:
- compatible
- reg
- backlight
- iovcc-supply
- vci-supply
additionalProperties: false
examples:
- |
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "leadtek,ltk050h3146w";
reg = <0>;
backlight = <&backlight>;
iovcc-supply = <&vcc_1v8>;
vci-supply = <&vcc3v3_lcd>;
};
};
...


@ -37,7 +37,6 @@ examples:
dsi {
#address-cells = <1>;
#size-cells = <0>;
reg = <0xff450000 0x1000>;
panel@0 {
compatible = "leadtek,ltk500hd1829";


@ -63,9 +63,9 @@ properties:
display-timings:
description:
Some display panels supports several resolutions with different timing.
Some display panels support several resolutions with different timings.
The display-timings bindings supports specifying several timings and
optional specify which is the native mode.
optionally specifying which is the native mode.
allOf:
- $ref: display-timings.yaml#


@ -227,6 +227,8 @@ properties:
- sharp,ls020b1dd01d
# Shelly SCA07010-BFN-LNN 7.0" WVGA TFT LCD panel
- shelly,sca07010-bfn-lnn
# Starry KR070PE2T 7" WVGA TFT LCD panel
- starry,kr070pe2t
# Starry 12.2" (1920x1200 pixels) TFT LCD panel
- starry,kr122ea0sra
# Tianma Micro-electronics TM070JDHG30 7.0" WXGA TFT LCD panel


@ -1,30 +0,0 @@
Sitronix ST7701 based LCD panels
ST7701 designed for small and medium sizes of TFT LCD display, is
capable of supporting up to 480RGBX864 in resolution. It provides
several system interfaces like MIPI/RGB/SPI.
Techstar TS8550B is 480x854, 2-lane MIPI DSI LCD panel which has
inbuilt ST7701 chip.
Required properties:
- compatible: must be "sitronix,st7701" and one of
* "techstar,ts8550b"
- reset-gpios: a GPIO phandle for the reset pin
Required properties for techstar,ts8550b:
- reg: DSI virtual channel used by that screen
- VCC-supply: analog regulator for MIPI circuit
- IOVCC-supply: I/O system regulator
Optional properties:
- backlight: phandle for the backlight control.
panel@0 {
compatible = "techstar,ts8550b", "sitronix,st7701";
reg = <0>;
VCC-supply = <&reg_dldo2>;
IOVCC-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
};


@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/sitronix,st7701.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Sitronix ST7701 based LCD panels
maintainers:
- Jagan Teki <jagan@amarulasolutions.com>
description: |
ST7701 designed for small and medium sizes of TFT LCD display, is
capable of supporting up to 480RGBX864 in resolution. It provides
several system interfaces like MIPI/RGB/SPI.
Techstar TS8550B is 480x854, 2-lane MIPI DSI LCD panel which has
inbuilt ST7701 chip.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
items:
- enum:
- techstar,ts8550b
- const: sitronix,st7701
reg:
description: DSI virtual channel used by that screen
maxItems: 1
VCC-supply:
description: analog regulator for MIPI circuit
IOVCC-supply:
description: I/O system regulator
reset-gpios: true
backlight: true
required:
- compatible
- reg
- VCC-supply
- IOVCC-supply
- reset-gpios
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "techstar,ts8550b", "sitronix,st7701";
reg = <0>;
VCC-supply = <&reg_dldo2>;
IOVCC-supply = <&reg_dldo2>;
reset-gpios = <&pio 3 24 GPIO_ACTIVE_HIGH>; /* LCD-RST: PD24 */
backlight = <&backlight>;
};
};


@ -0,0 +1,57 @@
# SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/visionox,rm69299.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Visionox model RM69299 Panels Device Tree Bindings.
maintainers:
- Harigovindan P <harigovi@codeaurora.org>
description: |
This binding is for display panels using a Visionox RM692999 panel.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: visionox,rm69299-1080p-display
vdda-supply:
description: |
Phandle of the regulator that provides the vdda supply voltage.
vdd3p3-supply:
description: |
Phandle of the regulator that provides the vdd3p3 supply voltage.
port: true
reset-gpios: true
additionalProperties: false
required:
- compatible
- vdda-supply
- vdd3p3-supply
- reset-gpios
- port
examples:
- |
panel {
compatible = "visionox,rm69299-1080p-display";
vdda-supply = <&src_pp1800_l8c>;
vdd3p3-supply = <&src_pp2800_l18a>;
reset-gpios = <&pm6150l_gpio 3 0>;
port {
panel0_in: endpoint {
remote-endpoint = <&dsi0_out>;
};
};
};
...


@ -37,7 +37,6 @@ examples:
dsi {
#address-cells = <1>;
#size-cells = <0>;
reg = <0xff450000 0x1000>;
panel@0 {
compatible = "xinpeng,xpp055c272";


@ -1,74 +0,0 @@
device-tree bindings for rockchip soc display controller (vop)
VOP (Visual Output Processor) is the Display Controller for the Rockchip
series of SoCs which transfers the image data from a video memory
buffer to an external LCD interface.
Required properties:
- compatible: value should be one of the following
"rockchip,rk3036-vop";
"rockchip,rk3126-vop";
"rockchip,px30-vop-lit";
"rockchip,px30-vop-big";
"rockchip,rk3066-vop";
"rockchip,rk3188-vop";
"rockchip,rk3288-vop";
"rockchip,rk3368-vop";
"rockchip,rk3366-vop";
"rockchip,rk3399-vop-big";
"rockchip,rk3399-vop-lit";
"rockchip,rk3228-vop";
"rockchip,rk3328-vop";
- reg: Must contain one entry corresponding to the base address and length
of the register space. Can optionally contain a second entry
corresponding to the CRTC gamma LUT address.
- interrupts: should contain a list of all VOP IP block interrupts in the
order: VSYNC, LCD_SYSTEM. The interrupt specifier
format depends on the interrupt controller used.
- clocks: must include clock specifiers corresponding to entries in the
clock-names property.
- clock-names: Must contain
aclk_vop: for ddr buffer transfer.
hclk_vop: for ahb bus to R/W the phy regs.
dclk_vop: pixel clock.
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- axi
- ahb
- dclk
- iommus: required a iommu node
- port: A port node with endpoint definitions as defined in
Documentation/devicetree/bindings/media/video-interfaces.txt.
Example:
SoC specific DT entry:
vopb: vopb@ff930000 {
compatible = "rockchip,rk3288-vop";
reg = <0x0 0xff930000 0x0 0x19c>, <0x0 0xff931000 0x0 0x1000>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru ACLK_VOP0>, <&cru DCLK_VOP0>, <&cru HCLK_VOP0>;
clock-names = "aclk_vop", "dclk_vop", "hclk_vop";
resets = <&cru SRST_LCDC1_AXI>, <&cru SRST_LCDC1_AHB>, <&cru SRST_LCDC1_DCLK>;
reset-names = "axi", "ahb", "dclk";
iommus = <&vopb_mmu>;
vopb_out: port {
#address-cells = <1>;
#size-cells = <0>;
vopb_out_edp: endpoint@0 {
reg = <0>;
remote-endpoint=<&edp_in_vopb>;
};
vopb_out_hdmi: endpoint@1 {
reg = <1>;
remote-endpoint=<&hdmi_in_vopb>;
};
};
};


@ -0,0 +1,134 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/rockchip/rockchip-vop.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip SoC display controller (VOP)
description:
VOP (Video Output Processor) is the display controller for the Rockchip
series of SoCs which transfers the image data from a video memory
buffer to an external LCD interface.
maintainers:
- Sandy Huang <hjc@rock-chips.com>
- Heiko Stuebner <heiko@sntech.de>
properties:
compatible:
enum:
- rockchip,px30-vop-big
- rockchip,px30-vop-lit
- rockchip,rk3036-vop
- rockchip,rk3066-vop
- rockchip,rk3126-vop
- rockchip,rk3188-vop
- rockchip,rk3228-vop
- rockchip,rk3288-vop
- rockchip,rk3328-vop
- rockchip,rk3366-vop
- rockchip,rk3368-vop
- rockchip,rk3399-vop-big
- rockchip,rk3399-vop-lit
reg:
minItems: 1
items:
- description:
Must contain one entry corresponding to the base address and length
of the register space.
- description:
Can optionally contain a second entry corresponding to
the CRTC gamma LUT address.
interrupts:
maxItems: 1
description:
The VOP interrupt is shared by several interrupt sources, such as
frame start (VSYNC), line flag and other status interrupts.
clocks:
items:
- description: Clock for ddr buffer transfer.
- description: Pixel clock.
- description: Clock for the ahb bus to R/W the phy regs.
clock-names:
items:
- const: aclk_vop
- const: dclk_vop
- const: hclk_vop
resets:
maxItems: 3
reset-names:
items:
- const: axi
- const: ahb
- const: dclk
port:
type: object
description:
A port node with endpoint definitions as defined in
Documentation/devicetree/bindings/media/video-interfaces.txt.
assigned-clocks:
maxItems: 2
assigned-clock-rates:
maxItems: 2
iommus:
maxItems: 1
power-domains:
maxItems: 1
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
- resets
- reset-names
- port
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/rk3288-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/power/rk3288-power.h>
vopb: vopb@ff930000 {
compatible = "rockchip,rk3288-vop";
reg = <0x0 0xff930000 0x0 0x19c>,
<0x0 0xff931000 0x0 0x1000>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru ACLK_VOP0>,
<&cru DCLK_VOP0>,
<&cru HCLK_VOP0>;
clock-names = "aclk_vop", "dclk_vop", "hclk_vop";
power-domains = <&power RK3288_PD_VIO>;
resets = <&cru SRST_LCDC1_AXI>,
<&cru SRST_LCDC1_AHB>,
<&cru SRST_LCDC1_DCLK>;
reset-names = "axi", "ahb", "dclk";
iommus = <&vopb_mmu>;
vopb_out: port {
#address-cells = <1>;
#size-cells = <0>;
vopb_out_edp: endpoint@0 {
reg = <0>;
remote-endpoint=<&edp_in_vopb>;
};
vopb_out_hdmi: endpoint@1 {
reg = <1>;
remote-endpoint=<&hdmi_in_vopb>;
};
};
};


@ -132,6 +132,18 @@ be unmapped; on many devices, the ROM address decoder is shared with
other BARs, so leaving it mapped could cause undesired behaviour like
hangs or memory corruption.
Managed Resources
-----------------
.. kernel-doc:: drivers/gpu/drm/drm_managed.c
:doc: managed resources
.. kernel-doc:: drivers/gpu/drm/drm_managed.c
:export:
.. kernel-doc:: include/drm/drm_managed.h
:internal:
Bus-specific Device Registration and PCI Support
------------------------------------------------


@ -3,7 +3,7 @@ Kernel Mode Setting (KMS)
=========================
Drivers must initialize the mode setting core by calling
drm_mode_config_init() on the DRM device. The function
drmm_mode_config_init() on the DRM device. The function
initializes the :c:type:`struct drm_device <drm_device>`
mode_config field and never fails. Once done, mode configuration must
be setup by initializing the following fields.
@ -397,6 +397,9 @@ Connector Functions Reference
Writeback Connectors
--------------------
.. kernel-doc:: include/drm/drm_writeback.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_writeback.c
:doc: overview


@ -373,15 +373,6 @@ GEM CMA Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
:export:
VRAM Helper Function Reference
==============================
.. kernel-doc:: drivers/gpu/drm/drm_vram_helper_common.c
:doc: overview
.. kernel-doc:: include/drm/drm_gem_vram_helper.h
:internal:
GEM VRAM Helper Functions Reference
-----------------------------------


@ -5045,7 +5045,7 @@ F: drivers/dma-buf/
F: include/linux/*fence.h
F: include/linux/dma-buf*
F: include/linux/dma-resv.h
K: dma_(buf|fence|resv)
K: \bdma_(?:buf|fence|resv)\b
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vkoul@kernel.org>
@ -5300,7 +5300,7 @@ F: drivers/gpu/drm/panel/panel-feixin-k101-im2ba02.c
DRM DRIVER FOR FEIYANG FY07024DI26A30-D MIPI-DSI LCD PANELS
M: Jagan Teki <jagan@amarulasolutions.com>
S: Maintained
F: Documentation/devicetree/bindings/display/panel/feiyang,fy07024di26a30d.txt
F: Documentation/devicetree/bindings/display/panel/feiyang,fy07024di26a30d.yaml
F: drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
DRM DRIVER FOR GRAIN MEDIA GM12U320 PROJECTORS
@ -5450,7 +5450,7 @@ F: drivers/gpu/drm/tiny/st7586.c
DRM DRIVER FOR SITRONIX ST7701 PANELS
M: Jagan Teki <jagan@amarulasolutions.com>
S: Maintained
F: Documentation/devicetree/bindings/display/panel/sitronix,st7701.txt
F: Documentation/devicetree/bindings/display/panel/sitronix,st7701.yaml
F: drivers/gpu/drm/panel/panel-sitronix-st7701.c
DRM DRIVER FOR SITRONIX ST7735R PANELS


@ -9,6 +9,7 @@ obj-$(CONFIG_UDMABUF) += udmabuf.o
dmabuf_selftests-y := \
selftest.o \
st-dma-fence.o
st-dma-fence.o \
st-dma-fence-chain.o
obj-$(CONFIG_DMABUF_SELFTESTS) += dmabuf_selftests.o


@ -690,6 +690,8 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
attach->dev = dev;
attach->dmabuf = dmabuf;
if (importer_ops)
attach->peer2peer = importer_ops->allow_peer2peer;
attach->importer_ops = importer_ops;
attach->importer_priv = importer_priv;
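
The two added lines above copy the importer's allow_peer2peer capability into the attachment; the amdgpu hunks further down set that capability and test attach->peer2peer on the exporter side. A hedged sketch of the importer side follows — the my_*() names are hypothetical, only the dma-buf calls are the ones touched by this series.

#include <linux/dma-buf.h>

/* A move_notify callback is required for dynamic attachments; a real
 * driver would invalidate or re-create its mappings here. */
static void my_move_notify(struct dma_buf_attachment *attach)
{
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,	/* importer can handle bus addresses */
	.move_notify = my_move_notify,
};

static struct dma_buf_attachment *my_attach(struct dma_buf *dmabuf,
					    struct device *dev, void *priv)
{
	/* The exporter may later check attach->peer2peer before handing
	 * out peer-to-peer (e.g. VRAM) addresses instead of system pages. */
	return dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, priv);
}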


@ -62,7 +62,8 @@ struct dma_fence *dma_fence_chain_walk(struct dma_fence *fence)
replacement = NULL;
}
tmp = cmpxchg((void **)&chain->prev, (void *)prev, (void *)replacement);
tmp = cmpxchg((struct dma_fence __force **)&chain->prev,
prev, replacement);
if (tmp == prev)
dma_fence_put(tmp);
else
@ -98,6 +99,12 @@ int dma_fence_chain_find_seqno(struct dma_fence **pfence, uint64_t seqno)
return -EINVAL;
dma_fence_chain_for_each(*pfence, &chain->base) {
if ((*pfence)->seqno < seqno) { /* already signaled */
dma_fence_put(*pfence);
*pfence = NULL;
break;
}
if ((*pfence)->context != chain->base.context ||
to_dma_fence_chain(*pfence)->prev_seqno < seqno)
break;
@ -221,6 +228,7 @@ EXPORT_SYMBOL(dma_fence_chain_ops);
* @chain: the chain node to initialize
* @prev: the previous fence
* @fence: the current fence
* @seqno: the sequence number (syncpt) of the fence within the chain
*
* Initialize a new chain node and either start a new chain or add the node to
* the existing chain of the previous fence.


@ -11,3 +11,4 @@
*/
selftest(sanitycheck, __sanitycheck__) /* keep first (igt selfcheck) */
selftest(dma_fence, dma_fence)
selftest(dma_fence_chain, dma_fence_chain)


@ -0,0 +1,715 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2019 Intel Corporation
*/
#include <linux/delay.h>
#include <linux/dma-fence.h>
#include <linux/dma-fence-chain.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/random.h>
#include "selftest.h"
#define CHAIN_SZ (4 << 10)
static struct kmem_cache *slab_fences;
static inline struct mock_fence {
struct dma_fence base;
spinlock_t lock;
} *to_mock_fence(struct dma_fence *f) {
return container_of(f, struct mock_fence, base);
}
static const char *mock_name(struct dma_fence *f)
{
return "mock";
}
static void mock_fence_release(struct dma_fence *f)
{
kmem_cache_free(slab_fences, to_mock_fence(f));
}
static const struct dma_fence_ops mock_ops = {
.get_driver_name = mock_name,
.get_timeline_name = mock_name,
.release = mock_fence_release,
};
static struct dma_fence *mock_fence(void)
{
struct mock_fence *f;
f = kmem_cache_alloc(slab_fences, GFP_KERNEL);
if (!f)
return NULL;
spin_lock_init(&f->lock);
dma_fence_init(&f->base, &mock_ops, &f->lock, 0, 0);
return &f->base;
}
static inline struct mock_chain {
struct dma_fence_chain base;
} *to_mock_chain(struct dma_fence *f) {
return container_of(f, struct mock_chain, base.base);
}
static struct dma_fence *mock_chain(struct dma_fence *prev,
struct dma_fence *fence,
u64 seqno)
{
struct mock_chain *f;
f = kmalloc(sizeof(*f), GFP_KERNEL);
if (!f)
return NULL;
dma_fence_chain_init(&f->base,
dma_fence_get(prev),
dma_fence_get(fence),
seqno);
return &f->base.base;
}
static int sanitycheck(void *arg)
{
struct dma_fence *f, *chain;
int err = 0;
f = mock_fence();
if (!f)
return -ENOMEM;
chain = mock_chain(NULL, f, 1);
if (!chain)
err = -ENOMEM;
dma_fence_signal(f);
dma_fence_put(f);
dma_fence_put(chain);
return err;
}
struct fence_chains {
unsigned int chain_length;
struct dma_fence **fences;
struct dma_fence **chains;
struct dma_fence *tail;
};
static uint64_t seqno_inc(unsigned int i)
{
return i + 1;
}
static int fence_chains_init(struct fence_chains *fc, unsigned int count,
uint64_t (*seqno_fn)(unsigned int))
{
unsigned int i;
int err = 0;
fc->chains = kvmalloc_array(count, sizeof(*fc->chains),
GFP_KERNEL | __GFP_ZERO);
if (!fc->chains)
return -ENOMEM;
fc->fences = kvmalloc_array(count, sizeof(*fc->fences),
GFP_KERNEL | __GFP_ZERO);
if (!fc->fences) {
err = -ENOMEM;
goto err_chains;
}
fc->tail = NULL;
for (i = 0; i < count; i++) {
fc->fences[i] = mock_fence();
if (!fc->fences[i]) {
err = -ENOMEM;
goto unwind;
}
fc->chains[i] = mock_chain(fc->tail,
fc->fences[i],
seqno_fn(i));
if (!fc->chains[i]) {
err = -ENOMEM;
goto unwind;
}
fc->tail = fc->chains[i];
}
fc->chain_length = i;
return 0;
unwind:
for (i = 0; i < count; i++) {
dma_fence_put(fc->fences[i]);
dma_fence_put(fc->chains[i]);
}
kvfree(fc->fences);
err_chains:
kvfree(fc->chains);
return err;
}
static void fence_chains_fini(struct fence_chains *fc)
{
unsigned int i;
for (i = 0; i < fc->chain_length; i++) {
dma_fence_signal(fc->fences[i]);
dma_fence_put(fc->fences[i]);
}
kvfree(fc->fences);
for (i = 0; i < fc->chain_length; i++)
dma_fence_put(fc->chains[i]);
kvfree(fc->chains);
}
static int find_seqno(void *arg)
{
struct fence_chains fc;
struct dma_fence *fence;
int err;
int i;
err = fence_chains_init(&fc, 64, seqno_inc);
if (err)
return err;
fence = dma_fence_get(fc.tail);
err = dma_fence_chain_find_seqno(&fence, 0);
dma_fence_put(fence);
if (err) {
pr_err("Reported %d for find_seqno(0)!\n", err);
goto err;
}
for (i = 0; i < fc.chain_length; i++) {
fence = dma_fence_get(fc.tail);
err = dma_fence_chain_find_seqno(&fence, i + 1);
dma_fence_put(fence);
if (err) {
pr_err("Reported %d for find_seqno(%d:%d)!\n",
err, fc.chain_length + 1, i + 1);
goto err;
}
if (fence != fc.chains[i]) {
pr_err("Incorrect fence reported by find_seqno(%d:%d)\n",
fc.chain_length + 1, i + 1);
err = -EINVAL;
goto err;
}
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, i + 1);
dma_fence_put(fence);
if (err) {
pr_err("Error reported for finding self\n");
goto err;
}
if (fence != fc.chains[i]) {
pr_err("Incorrect fence reported by find self\n");
err = -EINVAL;
goto err;
}
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, i + 2);
dma_fence_put(fence);
if (!err) {
pr_err("Error not reported for future fence: find_seqno(%d:%d)!\n",
i + 1, i + 2);
err = -EINVAL;
goto err;
}
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, i);
dma_fence_put(fence);
if (err) {
pr_err("Error reported for previous fence!\n");
goto err;
}
if (i > 0 && fence != fc.chains[i - 1]) {
pr_err("Incorrect fence reported by find_seqno(%d:%d)\n",
i + 1, i);
err = -EINVAL;
goto err;
}
}
err:
fence_chains_fini(&fc);
return err;
}
static int find_signaled(void *arg)
{
struct fence_chains fc;
struct dma_fence *fence;
int err;
err = fence_chains_init(&fc, 2, seqno_inc);
if (err)
return err;
dma_fence_signal(fc.fences[0]);
fence = dma_fence_get(fc.tail);
err = dma_fence_chain_find_seqno(&fence, 1);
dma_fence_put(fence);
if (err) {
pr_err("Reported %d for find_seqno()!\n", err);
goto err;
}
if (fence && fence != fc.chains[0]) {
pr_err("Incorrect chain-fence.seqno:%lld reported for completed seqno:1\n",
fence->seqno);
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, 1);
dma_fence_put(fence);
if (err)
pr_err("Reported %d for finding self!\n", err);
err = -EINVAL;
}
err:
fence_chains_fini(&fc);
return err;
}
static int find_out_of_order(void *arg)
{
struct fence_chains fc;
struct dma_fence *fence;
int err;
err = fence_chains_init(&fc, 3, seqno_inc);
if (err)
return err;
dma_fence_signal(fc.fences[1]);
fence = dma_fence_get(fc.tail);
err = dma_fence_chain_find_seqno(&fence, 2);
dma_fence_put(fence);
if (err) {
pr_err("Reported %d for find_seqno()!\n", err);
goto err;
}
if (fence && fence != fc.chains[1]) {
pr_err("Incorrect chain-fence.seqno:%lld reported for completed seqno:2\n",
fence->seqno);
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, 2);
dma_fence_put(fence);
if (err)
pr_err("Reported %d for finding self!\n", err);
err = -EINVAL;
}
err:
fence_chains_fini(&fc);
return err;
}
static uint64_t seqno_inc2(unsigned int i)
{
return 2 * i + 2;
}
static int find_gap(void *arg)
{
struct fence_chains fc;
struct dma_fence *fence;
int err;
int i;
err = fence_chains_init(&fc, 64, seqno_inc2);
if (err)
return err;
for (i = 0; i < fc.chain_length; i++) {
fence = dma_fence_get(fc.tail);
err = dma_fence_chain_find_seqno(&fence, 2 * i + 1);
dma_fence_put(fence);
if (err) {
pr_err("Reported %d for find_seqno(%d:%d)!\n",
err, fc.chain_length + 1, 2 * i + 1);
goto err;
}
if (fence != fc.chains[i]) {
pr_err("Incorrect fence.seqno:%lld reported by find_seqno(%d:%d)\n",
fence->seqno,
fc.chain_length + 1,
2 * i + 1);
err = -EINVAL;
goto err;
}
dma_fence_get(fence);
err = dma_fence_chain_find_seqno(&fence, 2 * i + 2);
dma_fence_put(fence);
if (err) {
pr_err("Error reported for finding self\n");
goto err;
}
if (fence != fc.chains[i]) {
pr_err("Incorrect fence reported by find self\n");
err = -EINVAL;
goto err;
}
}
err:
fence_chains_fini(&fc);
return err;
}
struct find_race {
struct fence_chains fc;
atomic_t children;
};
static int __find_race(void *arg)
{
struct find_race *data = arg;
int err = 0;
while (!kthread_should_stop()) {
struct dma_fence *fence = dma_fence_get(data->fc.tail);
int seqno;
seqno = prandom_u32_max(data->fc.chain_length) + 1;
err = dma_fence_chain_find_seqno(&fence, seqno);
if (err) {
pr_err("Failed to find fence seqno:%d\n",
seqno);
dma_fence_put(fence);
break;
}
if (!fence)
goto signal;
err = dma_fence_chain_find_seqno(&fence, seqno);
if (err) {
pr_err("Reported an invalid fence for find-self:%d\n",
seqno);
dma_fence_put(fence);
break;
}
if (fence->seqno < seqno) {
pr_err("Reported an earlier fence.seqno:%lld for seqno:%d\n",
fence->seqno, seqno);
err = -EINVAL;
dma_fence_put(fence);
break;
}
dma_fence_put(fence);
signal:
seqno = prandom_u32_max(data->fc.chain_length - 1);
dma_fence_signal(data->fc.fences[seqno]);
cond_resched();
}
if (atomic_dec_and_test(&data->children))
wake_up_var(&data->children);
return err;
}
static int find_race(void *arg)
{
struct find_race data;
int ncpus = num_online_cpus();
struct task_struct **threads;
unsigned long count;
int err;
int i;
err = fence_chains_init(&data.fc, CHAIN_SZ, seqno_inc);
if (err)
return err;
threads = kmalloc_array(ncpus, sizeof(*threads), GFP_KERNEL);
if (!threads) {
err = -ENOMEM;
goto err;
}
atomic_set(&data.children, 0);
for (i = 0; i < ncpus; i++) {
threads[i] = kthread_run(__find_race, &data, "dmabuf/%d", i);
if (IS_ERR(threads[i])) {
ncpus = i;
break;
}
atomic_inc(&data.children);
get_task_struct(threads[i]);
}
wait_var_event_timeout(&data.children,
!atomic_read(&data.children),
5 * HZ);
for (i = 0; i < ncpus; i++) {
int ret;
ret = kthread_stop(threads[i]);
if (ret && !err)
err = ret;
put_task_struct(threads[i]);
}
kfree(threads);
count = 0;
for (i = 0; i < data.fc.chain_length; i++)
if (dma_fence_is_signaled(data.fc.fences[i]))
count++;
pr_info("Completed %lu cycles\n", count);
err:
fence_chains_fini(&data.fc);
return err;
}
static int signal_forward(void *arg)
{
struct fence_chains fc;
int err;
int i;
err = fence_chains_init(&fc, 64, seqno_inc);
if (err)
return err;
for (i = 0; i < fc.chain_length; i++) {
dma_fence_signal(fc.fences[i]);
if (!dma_fence_is_signaled(fc.chains[i])) {
pr_err("chain[%d] not signaled!\n", i);
err = -EINVAL;
goto err;
}
if (i + 1 < fc.chain_length &&
dma_fence_is_signaled(fc.chains[i + 1])) {
pr_err("chain[%d] is signaled!\n", i);
err = -EINVAL;
goto err;
}
}
err:
fence_chains_fini(&fc);
return err;
}
static int signal_backward(void *arg)
{
struct fence_chains fc;
int err;
int i;
err = fence_chains_init(&fc, 64, seqno_inc);
if (err)
return err;
for (i = fc.chain_length; i--; ) {
dma_fence_signal(fc.fences[i]);
if (i > 0 && dma_fence_is_signaled(fc.chains[i])) {
pr_err("chain[%d] is signaled!\n", i);
err = -EINVAL;
goto err;
}
}
for (i = 0; i < fc.chain_length; i++) {
if (!dma_fence_is_signaled(fc.chains[i])) {
pr_err("chain[%d] was not signaled!\n", i);
err = -EINVAL;
goto err;
}
}
err:
fence_chains_fini(&fc);
return err;
}
static int __wait_fence_chains(void *arg)
{
struct fence_chains *fc = arg;
if (dma_fence_wait(fc->tail, false))
return -EIO;
return 0;
}
static int wait_forward(void *arg)
{
struct fence_chains fc;
struct task_struct *tsk;
int err;
int i;
err = fence_chains_init(&fc, CHAIN_SZ, seqno_inc);
if (err)
return err;
tsk = kthread_run(__wait_fence_chains, &fc, "dmabuf/wait");
if (IS_ERR(tsk)) {
err = PTR_ERR(tsk);
goto err;
}
get_task_struct(tsk);
yield_to(tsk, true);
for (i = 0; i < fc.chain_length; i++)
dma_fence_signal(fc.fences[i]);
err = kthread_stop(tsk);
put_task_struct(tsk);
err:
fence_chains_fini(&fc);
return err;
}
static int wait_backward(void *arg)
{
struct fence_chains fc;
struct task_struct *tsk;
int err;
int i;
err = fence_chains_init(&fc, CHAIN_SZ, seqno_inc);
if (err)
return err;
tsk = kthread_run(__wait_fence_chains, &fc, "dmabuf/wait");
if (IS_ERR(tsk)) {
err = PTR_ERR(tsk);
goto err;
}
get_task_struct(tsk);
yield_to(tsk, true);
for (i = fc.chain_length; i--; )
dma_fence_signal(fc.fences[i]);
err = kthread_stop(tsk);
put_task_struct(tsk);
err:
fence_chains_fini(&fc);
return err;
}
static void randomise_fences(struct fence_chains *fc)
{
unsigned int count = fc->chain_length;
/* Fisher-Yates shuffle courtesy of Knuth */
while (--count) {
unsigned int swp;
swp = prandom_u32_max(count + 1);
if (swp == count)
continue;
swap(fc->fences[count], fc->fences[swp]);
}
}
static int wait_random(void *arg)
{
struct fence_chains fc;
struct task_struct *tsk;
int err;
int i;
err = fence_chains_init(&fc, CHAIN_SZ, seqno_inc);
if (err)
return err;
randomise_fences(&fc);
tsk = kthread_run(__wait_fence_chains, &fc, "dmabuf/wait");
if (IS_ERR(tsk)) {
err = PTR_ERR(tsk);
goto err;
}
get_task_struct(tsk);
yield_to(tsk, true);
for (i = 0; i < fc.chain_length; i++)
dma_fence_signal(fc.fences[i]);
err = kthread_stop(tsk);
put_task_struct(tsk);
err:
fence_chains_fini(&fc);
return err;
}
int dma_fence_chain(void)
{
static const struct subtest tests[] = {
SUBTEST(sanitycheck),
SUBTEST(find_seqno),
SUBTEST(find_signaled),
SUBTEST(find_out_of_order),
SUBTEST(find_gap),
SUBTEST(find_race),
SUBTEST(signal_forward),
SUBTEST(signal_backward),
SUBTEST(wait_forward),
SUBTEST(wait_backward),
SUBTEST(wait_random),
};
int ret;
pr_info("sizeof(dma_fence_chain)=%zu\n",
sizeof(struct dma_fence_chain));
slab_fences = KMEM_CACHE(mock_fence,
SLAB_TYPESAFE_BY_RCU |
SLAB_HWCACHE_ALIGN);
if (!slab_fences)
return -ENOMEM;
ret = subtests(tests, NULL);
kmem_cache_destroy(slab_fences);
return ret;
}


@ -17,7 +17,8 @@ drm-y := drm_auth.o drm_cache.o \
drm_plane.o drm_color_mgmt.o drm_print.o \
drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \
drm_syncobj.o drm_lease.o drm_writeback.o drm_client.o \
drm_client_modeset.o drm_atomic_uapi.o drm_hdcp.o
drm_client_modeset.o drm_atomic_uapi.o drm_hdcp.o \
drm_managed.o
drm-$(CONFIG_DRM_LEGACY) += drm_legacy_misc.o drm_bufs.o drm_context.o drm_dma.o drm_scatter.o drm_lock.o
drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
@ -32,8 +33,7 @@ drm-$(CONFIG_PCI) += drm_pci.o
drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
drm_vram_helper-y := drm_gem_vram_helper.o \
drm_vram_helper_common.o
drm_vram_helper-y := drm_gem_vram_helper.o
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
drm_ttm_helper-y := drm_gem_ttm_helper.o


@ -38,6 +38,7 @@
#include <drm/amdgpu_drm.h>
#include <linux/dma-buf.h>
#include <linux/dma-fence-array.h>
#include <linux/pci-p2pdma.h>
/**
* amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
@ -179,6 +180,9 @@ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
int r;
if (pci_p2pdma_distance_many(adev->pdev, &attach->dev, 1, true) < 0)
attach->peer2peer = false;
if (attach->dev->driver == adev->dev->driver)
return 0;
@ -272,14 +276,21 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
struct dma_buf *dma_buf = attach->dmabuf;
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct sg_table *sgt;
long r;
if (!bo->pin_count) {
/* move buffer into GTT */
/* move buffer into GTT or VRAM */
struct ttm_operation_ctx ctx = { false, false };
unsigned domains = AMDGPU_GEM_DOMAIN_GTT;
amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
if (bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
attach->peer2peer) {
bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
domains |= AMDGPU_GEM_DOMAIN_VRAM;
}
amdgpu_bo_placement_from_domain(bo, domains);
r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
if (r)
return ERR_PTR(r);
@ -289,20 +300,34 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
return ERR_PTR(-EBUSY);
}
sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages, bo->tbo.num_pages);
if (IS_ERR(sgt))
return sgt;
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT:
sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
bo->tbo.num_pages);
if (IS_ERR(sgt))
return sgt;
if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
DMA_ATTR_SKIP_CPU_SYNC))
goto error_free;
if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
DMA_ATTR_SKIP_CPU_SYNC))
goto error_free;
break;
case TTM_PL_VRAM:
r = amdgpu_vram_mgr_alloc_sgt(adev, &bo->tbo.mem, attach->dev,
dir, &sgt);
if (r)
return ERR_PTR(r);
break;
default:
return ERR_PTR(-EINVAL);
}
return sgt;
error_free:
sg_free_table(sgt);
kfree(sgt);
return ERR_PTR(-ENOMEM);
return ERR_PTR(-EBUSY);
}
/**
@ -318,9 +343,18 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
struct sg_table *sgt,
enum dma_data_direction dir)
{
dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
sg_free_table(sgt);
kfree(sgt);
struct dma_buf *dma_buf = attach->dmabuf;
struct drm_gem_object *obj = dma_buf->priv;
struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
if (sgt->sgl->page_link) {
dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
sg_free_table(sgt);
kfree(sgt);
} else {
amdgpu_vram_mgr_free_sgt(adev, attach->dev, dir, sgt);
}
}
/**
@ -514,6 +548,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
}
static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops = {
.allow_peer2peer = true,
.move_notify = amdgpu_dma_buf_move_notify
};


@ -856,7 +856,7 @@ void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
const char *name = pp_lib_thermal_controller_names[controller->ucType];
info.addr = controller->ucI2cAddress >> 1;
strlcpy(info.type, name, sizeof(info.type));
i2c_new_device(&adev->pm.i2c_bus->adapter, &info);
i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
}
} else {
DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",


@ -29,6 +29,7 @@
#include <linux/module.h>
#include <linux/pagemap.h>
#include <linux/pci.h>
#include <linux/dma-buf.h>
#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>
@ -854,7 +855,8 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
attachment = READ_ONCE(bo->tbo.base.import_attach);
if (attachment)
seq_printf(m, " imported from %p", dma_buf);
seq_printf(m, " imported from %p%s", dma_buf,
attachment->peer2peer ? " P2P" : "");
else if (dma_buf)
seq_printf(m, " exported as %p", dma_buf);


@ -24,8 +24,9 @@
#ifndef __AMDGPU_TTM_H__
#define __AMDGPU_TTM_H__
#include "amdgpu.h"
#include <linux/dma-direction.h>
#include <drm/gpu_scheduler.h>
#include "amdgpu.h"
#define AMDGPU_PL_GDS (TTM_PL_PRIV + 0)
#define AMDGPU_PL_GWS (TTM_PL_PRIV + 1)
@ -74,6 +75,15 @@ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
struct ttm_mem_reg *mem,
struct device *dev,
enum dma_data_direction dir,
struct sg_table **sgt);
void amdgpu_vram_mgr_free_sgt(struct amdgpu_device *adev,
struct device *dev,
enum dma_data_direction dir,
struct sg_table *sgt);
uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);


@ -22,6 +22,7 @@
* Authors: Christian König
*/
#include <linux/dma-mapping.h>
#include "amdgpu.h"
#include "amdgpu_vm.h"
#include "amdgpu_atomfirmware.h"
@ -458,6 +459,104 @@ static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man,
mem->mm_node = NULL;
}
/**
* amdgpu_vram_mgr_alloc_sgt - allocate and fill a sg table
*
* @adev: amdgpu device pointer
* @mem: TTM memory object
* @dev: the other device
* @dir: dma direction
* @sgt: resulting sg table
*
* Allocate and fill a sg table from a VRAM allocation.
*/
int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
struct ttm_mem_reg *mem,
struct device *dev,
enum dma_data_direction dir,
struct sg_table **sgt)
{
struct drm_mm_node *node;
struct scatterlist *sg;
int num_entries = 0;
unsigned int pages;
int i, r;
*sgt = kmalloc(sizeof(*sg), GFP_KERNEL);
if (!*sgt)
return -ENOMEM;
for (pages = mem->num_pages, node = mem->mm_node;
pages; pages -= node->size, ++node)
++num_entries;
r = sg_alloc_table(*sgt, num_entries, GFP_KERNEL);
if (r)
goto error_free;
for_each_sg((*sgt)->sgl, sg, num_entries, i)
sg->length = 0;
node = mem->mm_node;
for_each_sg((*sgt)->sgl, sg, num_entries, i) {
phys_addr_t phys = (node->start << PAGE_SHIFT) +
adev->gmc.aper_base;
size_t size = node->size << PAGE_SHIFT;
dma_addr_t addr;
++node;
addr = dma_map_resource(dev, phys, size, dir,
DMA_ATTR_SKIP_CPU_SYNC);
r = dma_mapping_error(dev, addr);
if (r)
goto error_unmap;
sg_set_page(sg, NULL, size, 0);
sg_dma_address(sg) = addr;
sg_dma_len(sg) = size;
}
return 0;
error_unmap:
for_each_sg((*sgt)->sgl, sg, num_entries, i) {
if (!sg->length)
continue;
dma_unmap_resource(dev, sg->dma_address,
sg->length, dir,
DMA_ATTR_SKIP_CPU_SYNC);
}
sg_free_table(*sgt);
error_free:
kfree(*sgt);
return r;
}
/**
* amdgpu_vram_mgr_alloc_sgt - allocate and fill a sg table
*
* @adev: amdgpu device pointer
* @sgt: sg table to free
*
* Free a previously allocate sg table.
*/
void amdgpu_vram_mgr_free_sgt(struct amdgpu_device *adev,
struct device *dev,
enum dma_data_direction dir,
struct sg_table *sgt)
{
struct scatterlist *sg;
int i;
for_each_sg(sgt->sgl, sg, sgt->nents, i)
dma_unmap_resource(dev, sg->dma_address,
sg->length, dir,
DMA_ATTR_SKIP_CPU_SYNC);
sg_free_table(sgt);
kfree(sgt);
}
/**
* amdgpu_vram_mgr_usage - how many bytes are used in this domain
*


@ -136,17 +136,23 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
static void
dm_dp_mst_connector_destroy(struct drm_connector *connector)
{
struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
struct amdgpu_encoder *amdgpu_encoder = amdgpu_dm_connector->mst_encoder;
struct amdgpu_dm_connector *aconnector =
to_amdgpu_dm_connector(connector);
struct amdgpu_encoder *amdgpu_encoder = aconnector->mst_encoder;
kfree(amdgpu_dm_connector->edid);
amdgpu_dm_connector->edid = NULL;
if (aconnector->dc_sink) {
dc_link_remove_remote_sink(aconnector->dc_link,
aconnector->dc_sink);
dc_sink_release(aconnector->dc_sink);
}
kfree(aconnector->edid);
drm_encoder_cleanup(&amdgpu_encoder->base);
kfree(amdgpu_encoder);
drm_connector_cleanup(connector);
drm_dp_mst_put_port_malloc(amdgpu_dm_connector->port);
kfree(amdgpu_dm_connector);
drm_dp_mst_put_port_malloc(aconnector->port);
kfree(aconnector);
}
static int
@ -435,40 +441,13 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
*/
amdgpu_dm_connector_funcs_reset(connector);
DRM_INFO("DM_MST: added connector: %p [id: %d] [master: %p]\n",
aconnector, connector->base.id, aconnector->mst_port);
drm_dp_mst_get_port_malloc(port);
DRM_DEBUG_KMS(":%d\n", connector->base.id);
return connector;
}
static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_connector *connector)
{
struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
DRM_INFO("DM_MST: Disabling connector: %p [id: %d] [master: %p]\n",
aconnector, connector->base.id, aconnector->mst_port);
if (aconnector->dc_sink) {
amdgpu_dm_update_freesync_caps(connector, NULL);
dc_link_remove_remote_sink(aconnector->dc_link,
aconnector->dc_sink);
dc_sink_release(aconnector->dc_sink);
aconnector->dc_sink = NULL;
aconnector->dc_link->cur_link_settings.lane_count = 0;
}
drm_connector_unregister(connector);
drm_connector_put(connector);
}
static const struct drm_dp_mst_topology_cbs dm_mst_cbs = {
.add_connector = dm_dp_add_mst_connector,
.destroy_connector = dm_dp_destroy_mst_connector,
};
void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,


@ -137,10 +137,11 @@ static struct drm_info_list arcpgu_debugfs_list[] = {
{ "clocks", arcpgu_show_pxlclock, 0 },
};
static int arcpgu_debugfs_init(struct drm_minor *minor)
static void arcpgu_debugfs_init(struct drm_minor *minor)
{
return drm_debugfs_create_files(arcpgu_debugfs_list,
ARRAY_SIZE(arcpgu_debugfs_list), minor->debugfs_root, minor);
drm_debugfs_create_files(arcpgu_debugfs_list,
ARRAY_SIZE(arcpgu_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -14,6 +14,7 @@
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_irq.h>
#include <drm/drm_managed.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
@ -271,6 +272,7 @@ struct komeda_kms_dev *komeda_kms_attach(struct komeda_dev *mdev)
err = drm_dev_init(drm, &komeda_kms_driver, mdev->dev);
if (err)
goto free_kms;
drmm_add_final_kfree(drm, kms);
drm->dev_private = mdev;


@ -224,10 +224,11 @@ static struct drm_info_list hdlcd_debugfs_list[] = {
{ "clocks", hdlcd_show_pxlclock, 0 },
};
static int hdlcd_debugfs_init(struct drm_minor *minor)
static void hdlcd_debugfs_init(struct drm_minor *minor)
{
return drm_debugfs_create_files(hdlcd_debugfs_list,
ARRAY_SIZE(hdlcd_debugfs_list), minor->debugfs_root, minor);
drm_debugfs_create_files(hdlcd_debugfs_list,
ARRAY_SIZE(hdlcd_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -548,7 +548,7 @@ static const struct file_operations malidp_debugfs_fops = {
.release = single_release,
};
static int malidp_debugfs_init(struct drm_minor *minor)
static void malidp_debugfs_init(struct drm_minor *minor)
{
struct malidp_drm *malidp = minor->dev->dev_private;
@ -557,7 +557,6 @@ static int malidp_debugfs_init(struct drm_minor *minor)
spin_lock_init(&malidp->errors_lock);
debugfs_create_file("debug", S_IRUGO | S_IWUSR, minor->debugfs_root,
minor->dev, &malidp_debugfs_fops);
return 0;
}
#endif //CONFIG_DEBUG_FS


@ -12,6 +12,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_managed.h>
#include <drm/drm_prime.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_fb_helper.h>
@ -103,6 +104,7 @@ static int armada_drm_bind(struct device *dev)
kfree(priv);
return ret;
}
drmm_add_final_kfree(&priv->drm, priv);
/* Remove early framebuffers */
ret = drm_fb_helper_remove_conflicting_framebuffers(NULL,


@ -32,6 +32,7 @@
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_probe_helper.h>
@ -111,6 +112,8 @@ static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto err_ast_driver_unload;
drm_fbdev_generic_setup(dev, 32);
return 0;
err_ast_driver_unload:


@ -30,7 +30,6 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_vram_helper.h>
@ -512,10 +511,6 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
drm_mode_config_reset(dev);
ret = drm_fbdev_generic_setup(dev, 32);
if (ret)
goto out_free;
return 0;
out_free:
kfree(ast);


@ -11,9 +11,10 @@
#include <linux/media-bus-format.h>
#include <linux/of_graph.h>
#include <drm/drm_bridge.h>
#include <drm/drm_encoder.h>
#include <drm/drm_of.h>
#include <drm/drm_bridge.h>
#include <drm/drm_simple_kms_helper.h>
#include "atmel_hlcdc_dc.h"
@ -22,10 +23,6 @@ struct atmel_hlcdc_rgb_output {
int bus_fmt;
};
static const struct drm_encoder_funcs atmel_hlcdc_panel_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static struct atmel_hlcdc_rgb_output *
atmel_hlcdc_encoder_to_rgb_output(struct drm_encoder *encoder)
{
@ -98,9 +95,8 @@ static int atmel_hlcdc_attach_endpoint(struct drm_device *dev, int endpoint)
return -EINVAL;
}
ret = drm_encoder_init(dev, &output->encoder,
&atmel_hlcdc_panel_encoder_funcs,
DRM_MODE_ENCODER_NONE, NULL);
ret = drm_simple_encoder_init(dev, &output->encoder,
DRM_MODE_ENCODER_NONE);
if (ret)
return ret;


@ -92,7 +92,6 @@ void bochs_mm_fini(struct bochs_device *bochs);
/* bochs_kms.c */
int bochs_kms_init(struct bochs_device *bochs);
void bochs_kms_fini(struct bochs_device *bochs);
/* bochs_fbdev.c */
extern const struct drm_mode_config_funcs bochs_mode_funcs;


@ -7,6 +7,7 @@
#include <drm/drm_drv.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_managed.h>
#include "bochs.h"
@ -21,10 +22,7 @@ static void bochs_unload(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
bochs_kms_fini(bochs);
bochs_mm_fini(bochs);
kfree(bochs);
dev->dev_private = NULL;
}
static int bochs_load(struct drm_device *dev)
@ -32,7 +30,7 @@ static int bochs_load(struct drm_device *dev)
struct bochs_device *bochs;
int ret;
bochs = kzalloc(sizeof(*bochs), GFP_KERNEL);
bochs = drmm_kzalloc(dev, sizeof(*bochs), GFP_KERNEL);
if (bochs == NULL)
return -ENOMEM;
dev->dev_private = bochs;


@ -134,7 +134,11 @@ const struct drm_mode_config_funcs bochs_mode_funcs = {
int bochs_kms_init(struct bochs_device *bochs)
{
drm_mode_config_init(bochs->dev);
int ret;
ret = drmm_mode_config_init(bochs->dev);
if (ret)
return ret;
bochs->dev->mode_config.max_width = 8192;
bochs->dev->mode_config.max_height = 8192;
@ -160,12 +164,3 @@ int bochs_kms_init(struct bochs_device *bochs)
return 0;
}
void bochs_kms_fini(struct bochs_device *bochs)
{
if (!bochs->dev->mode_config.num_connector)
return;
drm_atomic_helper_shutdown(bochs->dev);
drm_mode_config_cleanup(bochs->dev);
}


@ -58,6 +58,22 @@ config DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW
to DP++. This is used with the i.MX6 imx-ldb
driver. You are likely to say N here.
config DRM_NWL_MIPI_DSI
tristate "Northwest Logic MIPI DSI Host controller"
depends on DRM
depends on COMMON_CLK
depends on OF && HAS_IOMEM
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select DRM_PANEL_BRIDGE
select GENERIC_PHY_MIPI_DPHY
select MFD_SYSCON
select MULTIPLEXER
select REGMAP_MMIO
help
This enables the Northwest Logic MIPI DSI Host controller as
for example found on NXP's i.MX8 Processors.
config DRM_NXP_PTN3460
tristate "NXP PTN3460 DP/LVDS bridge"
depends on OF


@ -18,6 +18,7 @@ obj-$(CONFIG_DRM_I2C_ADV7511) += adv7511/
obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o
obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o
obj-$(CONFIG_DRM_TI_TPD12S015) += ti-tpd12s015.o
obj-$(CONFIG_DRM_NWL_MIPI_DSI) += nwl-dsi.o
obj-y += analogix/
obj-y += synopsys/

File diff suppressed because it is too large


@ -0,0 +1,144 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* NWL MIPI DSI host driver
*
* Copyright (C) 2017 NXP
* Copyright (C) 2019 Purism SPC
*/
#ifndef __NWL_DSI_H__
#define __NWL_DSI_H__
/* DSI HOST registers */
#define NWL_DSI_CFG_NUM_LANES 0x0
#define NWL_DSI_CFG_NONCONTINUOUS_CLK 0x4
#define NWL_DSI_CFG_T_PRE 0x8
#define NWL_DSI_CFG_T_POST 0xc
#define NWL_DSI_CFG_TX_GAP 0x10
#define NWL_DSI_CFG_AUTOINSERT_EOTP 0x14
#define NWL_DSI_CFG_EXTRA_CMDS_AFTER_EOTP 0x18
#define NWL_DSI_CFG_HTX_TO_COUNT 0x1c
#define NWL_DSI_CFG_LRX_H_TO_COUNT 0x20
#define NWL_DSI_CFG_BTA_H_TO_COUNT 0x24
#define NWL_DSI_CFG_TWAKEUP 0x28
#define NWL_DSI_CFG_STATUS_OUT 0x2c
#define NWL_DSI_RX_ERROR_STATUS 0x30
/* DSI DPI registers */
#define NWL_DSI_PIXEL_PAYLOAD_SIZE 0x200
#define NWL_DSI_PIXEL_FIFO_SEND_LEVEL 0x204
#define NWL_DSI_INTERFACE_COLOR_CODING 0x208
#define NWL_DSI_PIXEL_FORMAT 0x20c
#define NWL_DSI_VSYNC_POLARITY 0x210
#define NWL_DSI_VSYNC_POLARITY_ACTIVE_LOW 0
#define NWL_DSI_VSYNC_POLARITY_ACTIVE_HIGH BIT(1)
#define NWL_DSI_HSYNC_POLARITY 0x214
#define NWL_DSI_HSYNC_POLARITY_ACTIVE_LOW 0
#define NWL_DSI_HSYNC_POLARITY_ACTIVE_HIGH BIT(1)
#define NWL_DSI_VIDEO_MODE 0x218
#define NWL_DSI_HFP 0x21c
#define NWL_DSI_HBP 0x220
#define NWL_DSI_HSA 0x224
#define NWL_DSI_ENABLE_MULT_PKTS 0x228
#define NWL_DSI_VBP 0x22c
#define NWL_DSI_VFP 0x230
#define NWL_DSI_BLLP_MODE 0x234
#define NWL_DSI_USE_NULL_PKT_BLLP 0x238
#define NWL_DSI_VACTIVE 0x23c
#define NWL_DSI_VC 0x240
/* DSI APB PKT control */
#define NWL_DSI_TX_PAYLOAD 0x280
#define NWL_DSI_PKT_CONTROL 0x284
#define NWL_DSI_SEND_PACKET 0x288
#define NWL_DSI_PKT_STATUS 0x28c
#define NWL_DSI_PKT_FIFO_WR_LEVEL 0x290
#define NWL_DSI_PKT_FIFO_RD_LEVEL 0x294
#define NWL_DSI_RX_PAYLOAD 0x298
#define NWL_DSI_RX_PKT_HEADER 0x29c
/* DSI IRQ handling */
#define NWL_DSI_IRQ_STATUS 0x2a0
#define NWL_DSI_SM_NOT_IDLE BIT(0)
#define NWL_DSI_TX_PKT_DONE BIT(1)
#define NWL_DSI_DPHY_DIRECTION BIT(2)
#define NWL_DSI_TX_FIFO_OVFLW BIT(3)
#define NWL_DSI_TX_FIFO_UDFLW BIT(4)
#define NWL_DSI_RX_FIFO_OVFLW BIT(5)
#define NWL_DSI_RX_FIFO_UDFLW BIT(6)
#define NWL_DSI_RX_PKT_HDR_RCVD BIT(7)
#define NWL_DSI_RX_PKT_PAYLOAD_DATA_RCVD BIT(8)
#define NWL_DSI_BTA_TIMEOUT BIT(29)
#define NWL_DSI_LP_RX_TIMEOUT BIT(30)
#define NWL_DSI_HS_TX_TIMEOUT BIT(31)
#define NWL_DSI_IRQ_STATUS2 0x2a4
#define NWL_DSI_SINGLE_BIT_ECC_ERR BIT(0)
#define NWL_DSI_MULTI_BIT_ECC_ERR BIT(1)
#define NWL_DSI_CRC_ERR BIT(2)
#define NWL_DSI_IRQ_MASK 0x2a8
#define NWL_DSI_SM_NOT_IDLE_MASK BIT(0)
#define NWL_DSI_TX_PKT_DONE_MASK BIT(1)
#define NWL_DSI_DPHY_DIRECTION_MASK BIT(2)
#define NWL_DSI_TX_FIFO_OVFLW_MASK BIT(3)
#define NWL_DSI_TX_FIFO_UDFLW_MASK BIT(4)
#define NWL_DSI_RX_FIFO_OVFLW_MASK BIT(5)
#define NWL_DSI_RX_FIFO_UDFLW_MASK BIT(6)
#define NWL_DSI_RX_PKT_HDR_RCVD_MASK BIT(7)
#define NWL_DSI_RX_PKT_PAYLOAD_DATA_RCVD_MASK BIT(8)
#define NWL_DSI_BTA_TIMEOUT_MASK BIT(29)
#define NWL_DSI_LP_RX_TIMEOUT_MASK BIT(30)
#define NWL_DSI_HS_TX_TIMEOUT_MASK BIT(31)
#define NWL_DSI_IRQ_MASK2 0x2ac
#define NWL_DSI_SINGLE_BIT_ECC_ERR_MASK BIT(0)
#define NWL_DSI_MULTI_BIT_ECC_ERR_MASK BIT(1)
#define NWL_DSI_CRC_ERR_MASK BIT(2)
/*
* PKT_CONTROL format:
* [15: 0] - word count
* [17:16] - virtual channel
* [23:18] - data type
* [24] - LP or HS select (0 - LP, 1 - HS)
* [25] - perform BTA after packet is sent
* [26] - perform BTA only, no packet tx
*/
#define NWL_DSI_WC(x) FIELD_PREP(GENMASK(15, 0), (x))
#define NWL_DSI_TX_VC(x) FIELD_PREP(GENMASK(17, 16), (x))
#define NWL_DSI_TX_DT(x) FIELD_PREP(GENMASK(23, 18), (x))
#define NWL_DSI_HS_SEL(x) FIELD_PREP(GENMASK(24, 24), (x))
#define NWL_DSI_BTA_TX(x) FIELD_PREP(GENMASK(25, 25), (x))
#define NWL_DSI_BTA_NO_TX(x) FIELD_PREP(GENMASK(26, 26), (x))
/*
* RX_PKT_HEADER format:
* [15: 0] - word count
* [21:16] - data type
* [23:22] - virtual channel
*/
#define NWL_DSI_RX_DT(x) FIELD_GET(GENMASK(21, 16), (x))
#define NWL_DSI_RX_VC(x) FIELD_GET(GENMASK(23, 22), (x))
/* DSI Video mode */
#define NWL_DSI_VM_BURST_MODE_WITH_SYNC_PULSES 0
#define NWL_DSI_VM_NON_BURST_MODE_WITH_SYNC_EVENTS BIT(0)
#define NWL_DSI_VM_BURST_MODE BIT(1)
/* DPI color coding */
#define NWL_DSI_DPI_16_BIT_565_PACKED 0
#define NWL_DSI_DPI_16_BIT_565_ALIGNED 1
#define NWL_DSI_DPI_16_BIT_565_SHIFTED 2
#define NWL_DSI_DPI_18_BIT_PACKED 3
#define NWL_DSI_DPI_18_BIT_ALIGNED 4
#define NWL_DSI_DPI_24_BIT 5
/* DPI Pixel format */
#define NWL_DSI_PIXEL_FORMAT_16 0
#define NWL_DSI_PIXEL_FORMAT_18 BIT(0)
#define NWL_DSI_PIXEL_FORMAT_18L BIT(1)
#define NWL_DSI_PIXEL_FORMAT_24 (BIT(0) | BIT(1))
#endif /* __NWL_DSI_H__ */
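
For illustration only (not part of the patch): the PKT_CONTROL and RX_PKT_HEADER layouts documented above are meant to be composed and decoded with the FIELD_PREP/FIELD_GET based macros. A minimal sketch, with the helper functions, the vc/dt/len parameters and the rx_hdr value (assumed to have been read from NWL_DSI_RX_PKT_HEADER) being hypothetical:

#include <linux/bitfield.h>
#include <linux/types.h>
#include "nwl-dsi.h"

/* build a transmit control word for a long packet sent in HS mode */
static u32 nwl_dsi_example_tx_ctrl(u8 vc, u8 dt, u16 len)
{
	return NWL_DSI_WC(len) | NWL_DSI_TX_VC(vc) |
	       NWL_DSI_TX_DT(dt) | NWL_DSI_HS_SEL(1);
}

/* pull the virtual channel and data type out of a received header word */
static void nwl_dsi_example_rx_decode(u32 rx_hdr, u8 *vc, u8 *dt)
{
	*vc = NWL_DSI_RX_VC(rx_hdr);
	*dt = NWL_DSI_RX_DT(rx_hdr);
}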


@ -311,6 +311,7 @@ EXPORT_SYMBOL(devm_drm_panel_bridge_add_typed);
/**
* drm_panel_bridge_connector - return the connector for the panel bridge
* @bridge: The drm_bridge.
*
* drm_panel_bridge creates the connector.
* This function gives external access to the connector.


@ -836,7 +836,8 @@ static int sii9234_init_resources(struct sii9234 *ctx,
ctx->supplies[3].supply = "cvcc12";
ret = devm_regulator_bulk_get(ctx->dev, 4, ctx->supplies);
if (ret) {
dev_err(ctx->dev, "regulator_bulk failed\n");
if (ret != -EPROBE_DEFER)
dev_err(ctx->dev, "regulator_bulk failed\n");
return ret;
}


@ -92,6 +92,12 @@ static const u16 csc_coeff_rgb_in_eitu709[3][4] = {
{ 0x6756, 0x78ab, 0x2000, 0x0200 }
};
static const u16 csc_coeff_rgb_full_to_rgb_limited[3][4] = {
{ 0x1b7c, 0x0000, 0x0000, 0x0020 },
{ 0x0000, 0x1b7c, 0x0000, 0x0020 },
{ 0x0000, 0x0000, 0x1b7c, 0x0020 }
};
struct hdmi_vmode {
bool mdataenablepolarity;
@ -109,6 +115,7 @@ struct hdmi_data_info {
unsigned int pix_repet_factor;
unsigned int hdcp_enable;
struct hdmi_vmode video_mode;
bool rgb_limited_range;
};
struct dw_hdmi_i2c {
@ -956,7 +963,14 @@ static void hdmi_video_sample(struct dw_hdmi *hdmi)
static int is_color_space_conversion(struct dw_hdmi *hdmi)
{
return hdmi->hdmi_data.enc_in_bus_format != hdmi->hdmi_data.enc_out_bus_format;
struct hdmi_data_info *hdmi_data = &hdmi->hdmi_data;
bool is_input_rgb, is_output_rgb;
is_input_rgb = hdmi_bus_fmt_is_rgb(hdmi_data->enc_in_bus_format);
is_output_rgb = hdmi_bus_fmt_is_rgb(hdmi_data->enc_out_bus_format);
return (is_input_rgb != is_output_rgb) ||
(is_input_rgb && is_output_rgb && hdmi_data->rgb_limited_range);
}
static int is_color_space_decimation(struct dw_hdmi *hdmi)
@ -983,28 +997,37 @@ static int is_color_space_interpolation(struct dw_hdmi *hdmi)
return 0;
}
static bool is_csc_needed(struct dw_hdmi *hdmi)
{
return is_color_space_conversion(hdmi) ||
is_color_space_decimation(hdmi) ||
is_color_space_interpolation(hdmi);
}
static void dw_hdmi_update_csc_coeffs(struct dw_hdmi *hdmi)
{
const u16 (*csc_coeff)[3][4] = &csc_coeff_default;
bool is_input_rgb, is_output_rgb;
unsigned i;
u32 csc_scale = 1;
if (is_color_space_conversion(hdmi)) {
if (hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format)) {
if (hdmi->hdmi_data.enc_out_encoding ==
V4L2_YCBCR_ENC_601)
csc_coeff = &csc_coeff_rgb_out_eitu601;
else
csc_coeff = &csc_coeff_rgb_out_eitu709;
} else if (hdmi_bus_fmt_is_rgb(
hdmi->hdmi_data.enc_in_bus_format)) {
if (hdmi->hdmi_data.enc_out_encoding ==
V4L2_YCBCR_ENC_601)
csc_coeff = &csc_coeff_rgb_in_eitu601;
else
csc_coeff = &csc_coeff_rgb_in_eitu709;
csc_scale = 0;
}
is_input_rgb = hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_in_bus_format);
is_output_rgb = hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format);
if (!is_input_rgb && is_output_rgb) {
if (hdmi->hdmi_data.enc_out_encoding == V4L2_YCBCR_ENC_601)
csc_coeff = &csc_coeff_rgb_out_eitu601;
else
csc_coeff = &csc_coeff_rgb_out_eitu709;
} else if (is_input_rgb && !is_output_rgb) {
if (hdmi->hdmi_data.enc_out_encoding == V4L2_YCBCR_ENC_601)
csc_coeff = &csc_coeff_rgb_in_eitu601;
else
csc_coeff = &csc_coeff_rgb_in_eitu709;
csc_scale = 0;
} else if (is_input_rgb && is_output_rgb &&
hdmi->hdmi_data.rgb_limited_range) {
csc_coeff = &csc_coeff_rgb_full_to_rgb_limited;
}
/* The CSC registers are sequential, alternating MSB then LSB */
@ -1614,6 +1637,18 @@ static void hdmi_config_AVI(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
drm_hdmi_avi_infoframe_from_display_mode(&frame,
&hdmi->connector, mode);
if (hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format)) {
drm_hdmi_avi_infoframe_quant_range(&frame, &hdmi->connector,
mode,
hdmi->hdmi_data.rgb_limited_range ?
HDMI_QUANTIZATION_RANGE_LIMITED :
HDMI_QUANTIZATION_RANGE_FULL);
} else {
frame.quantization_range = HDMI_QUANTIZATION_RANGE_DEFAULT;
frame.ycc_quantization_range =
HDMI_YCC_QUANTIZATION_RANGE_LIMITED;
}
if (hdmi_bus_fmt_is_yuv444(hdmi->hdmi_data.enc_out_bus_format))
frame.colorspace = HDMI_COLORSPACE_YUV444;
else if (hdmi_bus_fmt_is_yuv422(hdmi->hdmi_data.enc_out_bus_format))
@ -1654,8 +1689,6 @@ static void hdmi_config_AVI(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
}
frame.scan_mode = HDMI_SCAN_MODE_NONE;
/*
* The Designware IP uses a different byte format from standard
* AVI info frames, though generally the bits are in the correct
@ -2010,18 +2043,19 @@ static void dw_hdmi_enable_video_path(struct dw_hdmi *hdmi)
hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
/* Enable csc path */
if (is_color_space_conversion(hdmi)) {
if (is_csc_needed(hdmi)) {
hdmi->mc_clkdis &= ~HDMI_MC_CLKDIS_CSCCLK_DISABLE;
hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
}
/* Enable color space conversion if needed */
if (is_color_space_conversion(hdmi))
hdmi_writeb(hdmi, HDMI_MC_FLOWCTRL_FEED_THROUGH_OFF_CSC_IN_PATH,
HDMI_MC_FLOWCTRL);
else
} else {
hdmi->mc_clkdis |= HDMI_MC_CLKDIS_CSCCLK_DISABLE;
hdmi_writeb(hdmi, hdmi->mc_clkdis, HDMI_MC_CLKDIS);
hdmi_writeb(hdmi, HDMI_MC_FLOWCTRL_FEED_THROUGH_OFF_CSC_BYPASS,
HDMI_MC_FLOWCTRL);
}
}
/* Workaround to clear the overflow condition */
@ -2119,6 +2153,10 @@ static int dw_hdmi_setup(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
if (hdmi->hdmi_data.enc_out_bus_format == MEDIA_BUS_FMT_FIXED)
hdmi->hdmi_data.enc_out_bus_format = MEDIA_BUS_FMT_RGB888_1X24;
hdmi->hdmi_data.rgb_limited_range = hdmi->sink_is_hdmi &&
drm_default_rgb_quant_range(mode) ==
HDMI_QUANTIZATION_RANGE_LIMITED;
hdmi->hdmi_data.pix_repet_factor = 0;
hdmi->hdmi_data.hdcp_enable = 0;
hdmi->hdmi_data.video_mode.mdataenablepolarity = true;


@ -35,6 +35,7 @@
#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_managed.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
@ -509,11 +510,15 @@ static const struct drm_mode_config_funcs cirrus_mode_config_funcs = {
.atomic_commit = drm_atomic_helper_commit,
};
static void cirrus_mode_config_init(struct cirrus_device *cirrus)
static int cirrus_mode_config_init(struct cirrus_device *cirrus)
{
struct drm_device *dev = &cirrus->dev;
int ret;
ret = drmm_mode_config_init(dev);
if (ret)
return ret;
drm_mode_config_init(dev);
dev->mode_config.min_width = 0;
dev->mode_config.min_height = 0;
dev->mode_config.max_width = CIRRUS_MAX_PITCH / 2;
@ -521,18 +526,12 @@ static void cirrus_mode_config_init(struct cirrus_device *cirrus)
dev->mode_config.preferred_depth = 16;
dev->mode_config.prefer_shadow = 0;
dev->mode_config.funcs = &cirrus_mode_config_funcs;
return 0;
}
/* ------------------------------------------------------------------ */
static void cirrus_release(struct drm_device *dev)
{
struct cirrus_device *cirrus = dev->dev_private;
drm_mode_config_cleanup(dev);
kfree(cirrus);
}
DEFINE_DRM_GEM_FOPS(cirrus_fops);
static struct drm_driver cirrus_driver = {
@ -546,7 +545,6 @@ static struct drm_driver cirrus_driver = {
.fops = &cirrus_fops,
DRM_GEM_SHMEM_DRIVER_OPS,
.release = cirrus_release,
};
static int cirrus_pci_probe(struct pci_dev *pdev,
@ -560,7 +558,7 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
if (ret)
return ret;
ret = pci_enable_device(pdev);
ret = pcim_enable_device(pdev);
if (ret)
return ret;
@ -571,34 +569,38 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
ret = -ENOMEM;
cirrus = kzalloc(sizeof(*cirrus), GFP_KERNEL);
if (cirrus == NULL)
goto err_pci_release;
return ret;
dev = &cirrus->dev;
ret = drm_dev_init(dev, &cirrus_driver, &pdev->dev);
if (ret)
goto err_free_cirrus;
ret = devm_drm_dev_init(&pdev->dev, dev, &cirrus_driver);
if (ret) {
kfree(cirrus);
return ret;
}
dev->dev_private = cirrus;
drmm_add_final_kfree(dev, cirrus);
ret = -ENOMEM;
cirrus->vram = ioremap(pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
cirrus->vram = devm_ioremap(&pdev->dev, pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
if (cirrus->vram == NULL)
goto err_dev_put;
return -ENOMEM;
cirrus->mmio = ioremap(pci_resource_start(pdev, 1),
pci_resource_len(pdev, 1));
cirrus->mmio = devm_ioremap(&pdev->dev, pci_resource_start(pdev, 1),
pci_resource_len(pdev, 1));
if (cirrus->mmio == NULL)
goto err_unmap_vram;
return -ENOMEM;
cirrus_mode_config_init(cirrus);
ret = cirrus_mode_config_init(cirrus);
if (ret)
return ret;
ret = cirrus_conn_init(cirrus);
if (ret < 0)
goto err_cleanup;
return ret;
ret = cirrus_pipe_init(cirrus);
if (ret < 0)
goto err_cleanup;
return ret;
drm_mode_config_reset(dev);
@ -606,36 +608,18 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
pci_set_drvdata(pdev, dev);
ret = drm_dev_register(dev, 0);
if (ret)
goto err_cleanup;
return ret;
drm_fbdev_generic_setup(dev, dev->mode_config.preferred_depth);
return 0;
err_cleanup:
drm_mode_config_cleanup(dev);
iounmap(cirrus->mmio);
err_unmap_vram:
iounmap(cirrus->vram);
err_dev_put:
drm_dev_put(dev);
err_free_cirrus:
kfree(cirrus);
err_pci_release:
pci_release_regions(pdev);
return ret;
}
static void cirrus_pci_remove(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
struct cirrus_device *cirrus = dev->dev_private;
drm_dev_unplug(dev);
drm_atomic_helper_shutdown(dev);
iounmap(cirrus->mmio);
iounmap(cirrus->vram);
drm_dev_put(dev);
pci_release_regions(pdev);
}
static const struct pci_device_id pciidlist[] = {


@ -1641,10 +1641,10 @@ static const struct drm_info_list drm_atomic_debugfs_list[] = {
{"state", drm_state_info, 0},
};
int drm_atomic_debugfs_init(struct drm_minor *minor)
void drm_atomic_debugfs_init(struct drm_minor *minor)
{
return drm_debugfs_create_files(drm_atomic_debugfs_list,
ARRAY_SIZE(drm_atomic_debugfs_list),
minor->debugfs_root, minor);
drm_debugfs_create_files(drm_atomic_debugfs_list,
ARRAY_SIZE(drm_atomic_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -135,6 +135,7 @@ static int drm_set_master(struct drm_device *dev, struct drm_file *fpriv,
}
}
fpriv->was_master = (ret == 0);
return ret;
}
@ -174,17 +175,77 @@ out_err:
return ret;
}
/*
* In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
* CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
* from becoming master and/or failing to release it.
*
* At the same time, the first client (for a given VT) is _always_ master.
* Thus in order for the ioctls to succeed, one had to _explicitly_ run the
* application as root or flip the setuid bit.
*
* If the CAP_SYS_ADMIN was missing, no other client could become master...
* EVER :-( Leading to a) the graphics session dying badly or b) a completely
* locked session.
*
*
* At some point systemd-logind was introduced to orchestrate and delegate
* master as applicable. It does so by opening the fd and passing it to users
* while logind itself a) does the set/drop master per users' request and
* b) implicitly drops master on VT switch.
*
* Even though logind looks like the future, there are a few issues:
* - some platforms don't have an equivalent (Android, CrOS, some BSDs), so
* root is required _solely_ for SET/DROP MASTER.
* - applications may not be updated to use it,
* - any client which fails to drop master* can DoS the application using
* logind, to a varying degree.
*
* * Either due to missing CAP_SYS_ADMIN or simply not calling DROP_MASTER.
*
*
* Here we implement the next best thing:
* - ensure the logind style of fd passing works unchanged, and
* - allow a client to drop/set master, iff it is/was master at a given point
* in time.
*
* Note: DROP_MASTER cannot be free for all, as an arbitrary user could:
* - DoS/crash the arbitrator - details would be implementation specific
* - open the node, become master implicitly and cause issues
*
* As a result this fixes the following when using a root-less build w/o logind:
* - startx
* - weston
* - various compositors based on wlroots
*/
static int
drm_master_check_perm(struct drm_device *dev, struct drm_file *file_priv)
{
if (file_priv->pid == task_pid(current) && file_priv->was_master)
return 0;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
return 0;
}
int drm_setmaster_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
int ret = 0;
mutex_lock(&dev->master_mutex);
ret = drm_master_check_perm(dev, file_priv);
if (ret)
goto out_unlock;
if (drm_is_current_master(file_priv))
goto out_unlock;
if (dev->master) {
ret = -EINVAL;
ret = -EBUSY;
goto out_unlock;
}
@ -224,6 +285,12 @@ int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
int ret = -EINVAL;
mutex_lock(&dev->master_mutex);
ret = drm_master_check_perm(dev, file_priv);
if (ret)
goto out_unlock;
ret = -EINVAL;
if (!drm_is_current_master(file_priv))
goto out_unlock;
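
To make the intent of the permission rework above concrete, here is a sketch of the userspace flow that now works without CAP_SYS_ADMIN. It assumes libdrm's drmSetMaster()/drmDropMaster() wrappers around the ioctls; the helper itself and the device path are illustrative, not part of the patch:

#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

static int become_master_again(void)
{
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);

	if (fd < 0)
		return -1;

	/* the first client on an otherwise master-less device is master implicitly */
	drmDropMaster(fd);	/* permitted: this fd is the current master */
	/* ... unprivileged work, no modesetting ... */
	drmSetMaster(fd);	/* permitted again: this fd was master before */

	close(fd);
	return 0;
}

If another client grabs master in between, drmSetMaster() now fails with EBUSY instead of requiring root, which is exactly the behaviour the comment above describes.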


@ -183,6 +183,12 @@
* plane does not expose the "alpha" property, then this is
* assumed to be 1.0
*
* IN_FORMATS:
* Blob property which contains the set of buffer format and modifier
* pairs supported by this plane. The blob is a drm_format_modifier_blob
* struct. Without this property the plane doesn't support buffers with
* modifiers. Userspace cannot change this property.
*
* Note that all the property extensions described here apply either to the
* plane or the CRTC (e.g. for the background color, which currently is not
* exposed and assumed to be black).
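
As a hedged illustration of how userspace might consume the IN_FORMATS blob described above (not part of the patch; the dump_in_formats() helper is hypothetical and the blob is assumed to come from drmModeGetPropertyBlob(); include paths depend on the libdrm install, typically -I/usr/include/libdrm):

#include <stdint.h>
#include <stdio.h>
#include <drm_mode.h>		/* struct drm_format_modifier{,_blob} */
#include <xf86drmMode.h>	/* drmModePropertyBlobRes */

static void dump_in_formats(const drmModePropertyBlobRes *blob)
{
	const struct drm_format_modifier_blob *hdr = blob->data;
	const uint32_t *formats =
		(const void *)((const char *)hdr + hdr->formats_offset);
	const struct drm_format_modifier *mods =
		(const void *)((const char *)hdr + hdr->modifiers_offset);

	for (uint32_t i = 0; i < hdr->count_modifiers; i++) {
		for (uint32_t j = 0; j < hdr->count_formats; j++) {
			/* each modifier entry covers a 64-bit window of formats */
			if (j < mods[i].offset || j >= mods[i].offset + 64)
				continue;
			if (mods[i].formats & (1ULL << (j - mods[i].offset)))
				printf("format %#x supports modifier %#llx\n",
				       formats[j],
				       (unsigned long long)mods[i].modifier);
		}
	}
}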


@ -33,6 +33,7 @@
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/nospec.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/vmalloc.h>
@ -43,7 +44,6 @@
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_pci.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"


@ -457,10 +457,10 @@ static const struct drm_info_list drm_client_debugfs_list[] = {
{ "internal_clients", drm_client_debugfs_internal_clients, 0 },
};
int drm_client_debugfs_init(struct drm_minor *minor)
void drm_client_debugfs_init(struct drm_minor *minor)
{
return drm_debugfs_create_files(drm_client_debugfs_list,
ARRAY_SIZE(drm_client_debugfs_list),
minor->debugfs_root, minor);
drm_debugfs_create_files(drm_client_debugfs_list,
ARRAY_SIZE(drm_client_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -1970,6 +1970,8 @@ int drm_connector_update_edid_property(struct drm_connector *connector,
else
drm_reset_display_info(connector);
drm_update_tile_info(connector, edid);
drm_object_property_set_value(&connector->base,
dev->mode_config.non_desktop_property,
connector->display_info.non_desktop);
@ -2392,7 +2394,7 @@ EXPORT_SYMBOL(drm_mode_put_tile_group);
* tile group or NULL if not found.
*/
struct drm_tile_group *drm_mode_get_tile_group(struct drm_device *dev,
char topology[8])
const char topology[8])
{
struct drm_tile_group *tg;
int id;
@ -2422,7 +2424,7 @@ EXPORT_SYMBOL(drm_mode_get_tile_group);
* new tile group or NULL.
*/
struct drm_tile_group *drm_mode_create_tile_group(struct drm_device *dev,
char topology[8])
const char topology[8])
{
struct drm_tile_group *tg;
int ret;


@ -82,6 +82,7 @@ int drm_mode_setcrtc(struct drm_device *dev,
/* drm_mode_config.c */
int drm_modeset_register_all(struct drm_device *dev);
void drm_modeset_unregister_all(struct drm_device *dev);
void drm_mode_config_validate(struct drm_device *dev);
/* drm_modes.c */
const char *drm_get_mode_status_name(enum drm_mode_status status);
@ -224,7 +225,7 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
/* drm_atomic.c */
#ifdef CONFIG_DEBUG_FS
struct drm_minor;
int drm_atomic_debugfs_init(struct drm_minor *minor);
void drm_atomic_debugfs_init(struct drm_minor *minor);
#endif
int __drm_atomic_helper_disable_plane(struct drm_plane *plane,
@ -278,3 +279,4 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
void drm_mode_fixup_1366x768(struct drm_display_mode *mode);
void drm_reset_display_info(struct drm_connector *connector);
u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edid);
void drm_update_tile_info(struct drm_connector *connector, const struct edid *edid);


@ -172,8 +172,8 @@ static const struct file_operations drm_debugfs_fops = {
* &struct drm_info_list in the given root directory. These files will be removed
* automatically on drm_debugfs_cleanup().
*/
int drm_debugfs_create_files(const struct drm_info_list *files, int count,
struct dentry *root, struct drm_minor *minor)
void drm_debugfs_create_files(const struct drm_info_list *files, int count,
struct dentry *root, struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct drm_info_node *tmp;
@ -199,7 +199,6 @@ int drm_debugfs_create_files(const struct drm_info_list *files, int count,
list_add(&tmp->list, &minor->debugfs_list);
mutex_unlock(&minor->debugfs_lock);
}
return 0;
}
EXPORT_SYMBOL(drm_debugfs_create_files);
@ -208,52 +207,28 @@ int drm_debugfs_init(struct drm_minor *minor, int minor_id,
{
struct drm_device *dev = minor->dev;
char name[64];
int ret;
INIT_LIST_HEAD(&minor->debugfs_list);
mutex_init(&minor->debugfs_lock);
sprintf(name, "%d", minor_id);
minor->debugfs_root = debugfs_create_dir(name, root);
ret = drm_debugfs_create_files(drm_debugfs_list, DRM_DEBUGFS_ENTRIES,
minor->debugfs_root, minor);
if (ret) {
debugfs_remove(minor->debugfs_root);
minor->debugfs_root = NULL;
DRM_ERROR("Failed to create core drm debugfs files\n");
return ret;
}
drm_debugfs_create_files(drm_debugfs_list, DRM_DEBUGFS_ENTRIES,
minor->debugfs_root, minor);
if (drm_drv_uses_atomic_modeset(dev)) {
ret = drm_atomic_debugfs_init(minor);
if (ret) {
DRM_ERROR("Failed to create atomic debugfs files\n");
return ret;
}
drm_atomic_debugfs_init(minor);
}
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = drm_framebuffer_debugfs_init(minor);
if (ret) {
DRM_ERROR("Failed to create framebuffer debugfs file\n");
return ret;
}
drm_framebuffer_debugfs_init(minor);
ret = drm_client_debugfs_init(minor);
if (ret) {
DRM_ERROR("Failed to create client debugfs file\n");
return ret;
}
drm_client_debugfs_init(minor);
}
if (dev->driver->debugfs_init) {
ret = dev->driver->debugfs_init(minor);
if (ret) {
DRM_ERROR("DRM: Driver failed to initialize "
"/sys/kernel/debug/dri.\n");
return ret;
}
}
if (dev->driver->debugfs_init)
dev->driver->debugfs_init(minor);
return 0;
}


@ -34,9 +34,9 @@
*/
#include <linux/export.h>
#include <linux/pci.h>
#include <drm/drm_drv.h>
#include <drm/drm_pci.h>
#include <drm/drm_print.h>
#include "drm_legacy.h"


@ -27,6 +27,7 @@
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/iopoll.h>
#if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
#include <linux/stacktrace.h>
@ -687,51 +688,45 @@ static void drm_dp_encode_sideband_reply(struct drm_dp_sideband_msg_reply_body *
raw->cur_len = idx;
}
/* this adds a chunk of msg to the builder to get the final msg */
static bool drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg,
u8 *replybuf, u8 replybuflen, bool hdr)
static int drm_dp_sideband_msg_set_header(struct drm_dp_sideband_msg_rx *msg,
struct drm_dp_sideband_msg_hdr *hdr,
u8 hdrlen)
{
/*
* ignore out-of-order messages or messages that are part of a
* failed transaction
*/
if (!hdr->somt && !msg->have_somt)
return false;
/* get length contained in this portion */
msg->curchunk_idx = 0;
msg->curchunk_len = hdr->msg_len;
msg->curchunk_hdrlen = hdrlen;
/* we have already gotten an somt - don't bother parsing */
if (hdr->somt && msg->have_somt)
return false;
if (hdr->somt) {
memcpy(&msg->initial_hdr, hdr,
sizeof(struct drm_dp_sideband_msg_hdr));
msg->have_somt = true;
}
if (hdr->eomt)
msg->have_eomt = true;
return true;
}
/* this adds a chunk of msg to the builder to get the final msg */
static bool drm_dp_sideband_append_payload(struct drm_dp_sideband_msg_rx *msg,
u8 *replybuf, u8 replybuflen)
{
int ret;
u8 crc4;
if (hdr) {
u8 hdrlen;
struct drm_dp_sideband_msg_hdr recv_hdr;
ret = drm_dp_decode_sideband_msg_hdr(&recv_hdr, replybuf, replybuflen, &hdrlen);
if (ret == false) {
print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16, 1, replybuf, replybuflen, false);
return false;
}
/*
* ignore out-of-order messages or messages that are part of a
* failed transaction
*/
if (!recv_hdr.somt && !msg->have_somt)
return false;
/* get length contained in this portion */
msg->curchunk_len = recv_hdr.msg_len;
msg->curchunk_hdrlen = hdrlen;
/* we have already gotten an somt - don't bother parsing */
if (recv_hdr.somt && msg->have_somt)
return false;
if (recv_hdr.somt) {
memcpy(&msg->initial_hdr, &recv_hdr, sizeof(struct drm_dp_sideband_msg_hdr));
msg->have_somt = true;
}
if (recv_hdr.eomt)
msg->have_eomt = true;
/* copy the bytes for the remainder of this header chunk */
msg->curchunk_idx = min(msg->curchunk_len, (u8)(replybuflen - hdrlen));
memcpy(&msg->chunk[0], replybuf + hdrlen, msg->curchunk_idx);
} else {
memcpy(&msg->chunk[msg->curchunk_idx], replybuf, replybuflen);
msg->curchunk_idx += replybuflen;
}
memcpy(&msg->chunk[msg->curchunk_idx], replybuf, replybuflen);
msg->curchunk_idx += replybuflen;
if (msg->curchunk_idx >= msg->curchunk_len) {
/* do CRC */
@ -1060,13 +1055,12 @@ static void build_link_address(struct drm_dp_sideband_msg_tx *msg)
drm_dp_encode_sideband_req(&req, msg);
}
static int build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
static void build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
{
struct drm_dp_sideband_msg_req_body req;
req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE;
drm_dp_encode_sideband_req(&req, msg);
return 0;
}
static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg,
@ -1211,8 +1205,6 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
mstb->tx_slots[txmsg->seqno] = NULL;
}
mgr->is_waiting_for_dwn_reply = false;
}
out:
if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) {
@ -1222,7 +1214,6 @@ out:
}
mutex_unlock(&mgr->qlock);
drm_dp_mst_kick_tx(mgr);
return ret;
}
@ -2798,11 +2789,9 @@ static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
ret = process_single_tx_qlock(mgr, txmsg, false);
if (ret == 1) {
/* txmsg is sent it should be in the slots now */
mgr->is_waiting_for_dwn_reply = true;
list_del(&txmsg->next);
} else if (ret) {
DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
mgr->is_waiting_for_dwn_reply = false;
list_del(&txmsg->next);
if (txmsg->seqno != -1)
txmsg->dst->tx_slots[txmsg->seqno] = NULL;
@ -2842,8 +2831,7 @@ static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
}
if (list_is_singular(&mgr->tx_msg_downq) &&
!mgr->is_waiting_for_dwn_reply)
if (list_is_singular(&mgr->tx_msg_downq))
process_single_down_tx_qlock(mgr);
mutex_unlock(&mgr->qlock);
}
@ -3703,31 +3691,67 @@ out_fail:
}
EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);
static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
struct drm_dp_mst_branch **mstb, int *seqno)
{
int len;
u8 replyblock[32];
int replylen, curreply;
int ret;
u8 hdrlen;
struct drm_dp_sideband_msg_hdr hdr;
struct drm_dp_sideband_msg_rx *msg;
int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE : DP_SIDEBAND_MSG_DOWN_REP_BASE;
msg = up ? &mgr->up_req_recv : &mgr->down_rep_recv;
int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE :
DP_SIDEBAND_MSG_DOWN_REP_BASE;
if (!up)
*mstb = NULL;
*seqno = -1;
len = min(mgr->max_dpcd_transaction_bytes, 16);
ret = drm_dp_dpcd_read(mgr->aux, basereg,
replyblock, len);
ret = drm_dp_dpcd_read(mgr->aux, basereg, replyblock, len);
if (ret != len) {
DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret);
return false;
}
ret = drm_dp_sideband_msg_build(msg, replyblock, len, true);
ret = drm_dp_decode_sideband_msg_hdr(&hdr, replyblock, len, &hdrlen);
if (ret == false) {
print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16,
1, replyblock, len, false);
DRM_DEBUG_KMS("ERROR: failed header\n");
return false;
}
*seqno = hdr.seqno;
if (up) {
msg = &mgr->up_req_recv;
} else {
/* Caller is responsible for giving back this reference */
*mstb = drm_dp_get_mst_branch_device(mgr, hdr.lct, hdr.rad);
if (!*mstb) {
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
hdr.lct);
return false;
}
msg = &(*mstb)->down_rep_recv[hdr.seqno];
}
if (!drm_dp_sideband_msg_set_header(msg, &hdr, hdrlen)) {
DRM_DEBUG_KMS("sideband msg set header failed %d\n",
replyblock[0]);
return false;
}
replylen = min(msg->curchunk_len, (u8)(len - hdrlen));
ret = drm_dp_sideband_append_payload(msg, replyblock + hdrlen, replylen);
if (!ret) {
DRM_DEBUG_KMS("sideband msg build failed %d\n", replyblock[0]);
return false;
}
replylen = msg->curchunk_len + msg->curchunk_hdrlen;
replylen -= len;
replylen = msg->curchunk_len + msg->curchunk_hdrlen - len;
curreply = len;
while (replylen > 0) {
len = min3(replylen, mgr->max_dpcd_transaction_bytes, 16);
@ -3739,7 +3763,7 @@ static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
return false;
}
ret = drm_dp_sideband_msg_build(msg, replyblock, len, false);
ret = drm_dp_sideband_append_payload(msg, replyblock, len);
if (!ret) {
DRM_DEBUG_KMS("failed to build sideband msg\n");
return false;
@ -3754,67 +3778,63 @@ static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
{
struct drm_dp_sideband_msg_tx *txmsg;
struct drm_dp_mst_branch *mstb;
struct drm_dp_sideband_msg_hdr *hdr = &mgr->down_rep_recv.initial_hdr;
int slot = -1;
struct drm_dp_mst_branch *mstb = NULL;
struct drm_dp_sideband_msg_rx *msg = NULL;
int seqno = -1;
if (!drm_dp_get_one_sb_msg(mgr, false))
goto clear_down_rep_recv;
if (!drm_dp_get_one_sb_msg(mgr, false, &mstb, &seqno))
goto out_clear_reply;
if (!mgr->down_rep_recv.have_eomt)
return 0;
msg = &mstb->down_rep_recv[seqno];
mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad);
if (!mstb) {
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
hdr->lct);
goto clear_down_rep_recv;
}
/* Multi-packet message transmission, don't clear the reply */
if (!msg->have_eomt)
goto out;
/* find the message */
slot = hdr->seqno;
mutex_lock(&mgr->qlock);
txmsg = mstb->tx_slots[slot];
txmsg = mstb->tx_slots[seqno];
/* remove from slots */
mutex_unlock(&mgr->qlock);
if (!txmsg) {
struct drm_dp_sideband_msg_hdr *hdr;
hdr = &msg->initial_hdr;
DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n",
mstb, hdr->seqno, hdr->lct, hdr->rad[0],
mgr->down_rep_recv.msg[0]);
goto no_msg;
msg->msg[0]);
goto out_clear_reply;
}
drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply);
drm_dp_sideband_parse_reply(msg, &txmsg->reply);
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK)
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n",
txmsg->reply.req_type,
drm_dp_mst_req_type_str(txmsg->reply.req_type),
txmsg->reply.u.nak.reason,
drm_dp_mst_nak_reason_str(txmsg->reply.u.nak.reason),
txmsg->reply.u.nak.nak_data);
}
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx));
drm_dp_mst_topology_put_mstb(mstb);
mutex_lock(&mgr->qlock);
txmsg->state = DRM_DP_SIDEBAND_TX_RX;
mstb->tx_slots[slot] = NULL;
mgr->is_waiting_for_dwn_reply = false;
mstb->tx_slots[seqno] = NULL;
mutex_unlock(&mgr->qlock);
wake_up_all(&mgr->tx_waitq);
return 0;
no_msg:
drm_dp_mst_topology_put_mstb(mstb);
clear_down_rep_recv:
mutex_lock(&mgr->qlock);
mgr->is_waiting_for_dwn_reply = false;
mutex_unlock(&mgr->qlock);
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
out_clear_reply:
if (msg)
memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx));
out:
if (mstb)
drm_dp_mst_topology_put_mstb(mstb);
return 0;
}
@ -3890,11 +3910,10 @@ static void drm_dp_mst_up_req_work(struct work_struct *work)
static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
{
struct drm_dp_sideband_msg_hdr *hdr = &mgr->up_req_recv.initial_hdr;
struct drm_dp_pending_up_req *up_req;
bool seqno;
int seqno;
if (!drm_dp_get_one_sb_msg(mgr, true))
if (!drm_dp_get_one_sb_msg(mgr, true, NULL, &seqno))
goto out;
if (!mgr->up_req_recv.have_eomt)
@ -3907,7 +3926,6 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
}
INIT_LIST_HEAD(&up_req->next);
seqno = hdr->seqno;
drm_dp_sideband_parse_req(&mgr->up_req_recv, &up_req->msg);
if (up_req->msg.req_type != DP_CONNECTION_STATUS_NOTIFY &&
@ -3941,7 +3959,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
res_stat->available_pbn);
}
up_req->hdr = *hdr;
up_req->hdr = mgr->up_req_recv.initial_hdr;
mutex_lock(&mgr->up_req_lock);
list_add_tail(&up_req->next, &mgr->up_req_list);
mutex_unlock(&mgr->up_req_lock);
@ -4046,27 +4064,6 @@ out:
}
EXPORT_SYMBOL(drm_dp_mst_detect_port);
/**
* drm_dp_mst_port_has_audio() - Check whether port has audio capability or not
* @mgr: manager for this port
* @port: unverified pointer to a port.
*
* This returns whether the port supports audio or not.
*/
bool drm_dp_mst_port_has_audio(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_port *port)
{
bool ret = false;
port = drm_dp_mst_topology_get_port_validated(mgr, port);
if (!port)
return ret;
ret = port->has_audio;
drm_dp_mst_topology_put_port(port);
return ret;
}
EXPORT_SYMBOL(drm_dp_mst_port_has_audio);
/**
* drm_dp_mst_get_edid() - get EDID for an MST port
* @connector: toplevel connector to get EDID for
@ -4443,42 +4440,58 @@ fail:
return ret;
}
static int do_get_act_status(struct drm_dp_aux *aux)
{
int ret;
u8 status;
ret = drm_dp_dpcd_readb(aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
if (ret < 0)
return ret;
return status;
}
/**
* drm_dp_check_act_status() - Check ACT handled status.
* drm_dp_check_act_status() - Polls for ACT handled status.
* @mgr: manager to use
*
* Check the payload status bits in the DPCD for ACT handled completion.
* Tries waiting for the MST hub to finish updating its payload table by
* polling for the ACT handled bit for up to 3 seconds (yes, some hubs really
* take that long).
*
* Returns:
* 0 if the ACT was handled in time, negative error code on failure.
*/
int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
{
u8 status;
int ret;
int count = 0;
/*
* There doesn't seem to be any recommended retry count or timeout in
* the MST specification. Since some hubs have been observed to take
* over 1 second to update their payload allocations under certain
* conditions, we use a rather large timeout value.
*/
const int timeout_ms = 3000;
int ret, status;
do {
ret = drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
if (ret < 0) {
DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
goto fail;
}
if (status & DP_PAYLOAD_ACT_HANDLED)
break;
count++;
udelay(100);
} while (count < 30);
if (!(status & DP_PAYLOAD_ACT_HANDLED)) {
DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status, count);
ret = -EINVAL;
goto fail;
ret = readx_poll_timeout(do_get_act_status, mgr->aux, status,
status & DP_PAYLOAD_ACT_HANDLED || status < 0,
200, timeout_ms * USEC_PER_MSEC);
if (ret < 0 && status >= 0) {
DRM_ERROR("Failed to get ACT after %dms, last status: %02x\n",
timeout_ms, status);
return -EINVAL;
} else if (status < 0) {
/*
* Failure here isn't unexpected - the hub may have
* just been unplugged
*/
DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
status);
return status;
}
return 0;
fail:
return ret;
}
EXPORT_SYMBOL(drm_dp_check_act_status);
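
For context only, a sketch (not part of the patch) of where this helper typically sits in a driver's MST enable path; the example_mst_enable() wrapper and its parameters are hypothetical, and the exact ordering varies per driver:

#include <drm/drm_dp_mst_helper.h>

static void example_mst_enable(struct drm_dp_mst_topology_mgr *mgr,
			       struct drm_dp_mst_port *port, int pbn, int slots)
{
	if (!drm_dp_mst_allocate_vcpi(mgr, port, pbn, slots))
		return;

	drm_dp_update_payload_part1(mgr);

	/* ... the driver enables the stream/transcoder here ... */

	if (drm_dp_check_act_status(mgr) < 0)
		DRM_DEBUG_KMS("sink never set ACT handled\n");

	drm_dp_update_payload_part2(mgr);
}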
@ -4669,28 +4682,18 @@ static void drm_dp_tx_work(struct work_struct *work)
struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, tx_work);
mutex_lock(&mgr->qlock);
if (!list_empty(&mgr->tx_msg_downq) && !mgr->is_waiting_for_dwn_reply)
if (!list_empty(&mgr->tx_msg_downq))
process_single_down_tx_qlock(mgr);
mutex_unlock(&mgr->qlock);
}
static inline void drm_dp_destroy_connector(struct drm_dp_mst_port *port)
{
if (!port->connector)
return;
if (port->mgr->cbs->destroy_connector) {
port->mgr->cbs->destroy_connector(port->mgr, port->connector);
} else {
drm_connector_unregister(port->connector);
drm_connector_put(port->connector);
}
}
static inline void
drm_dp_delayed_destroy_port(struct drm_dp_mst_port *port)
{
drm_dp_destroy_connector(port);
if (port->connector) {
drm_connector_unregister(port->connector);
drm_connector_put(port->connector);
}
drm_dp_port_set_pdt(port, DP_PEER_DEVICE_NONE, port->mcs);
drm_dp_mst_put_port_malloc(port);


@ -39,6 +39,7 @@
#include <drm/drm_color_mgmt.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_managed.h>
#include <drm/drm_mode_object.h>
#include <drm/drm_print.h>
@ -92,13 +93,27 @@ static struct drm_minor **drm_minor_get_slot(struct drm_device *dev,
}
}
static void drm_minor_alloc_release(struct drm_device *dev, void *data)
{
struct drm_minor *minor = data;
unsigned long flags;
WARN_ON(dev != minor->dev);
put_device(minor->kdev);
spin_lock_irqsave(&drm_minor_lock, flags);
idr_remove(&drm_minors_idr, minor->index);
spin_unlock_irqrestore(&drm_minor_lock, flags);
}
static int drm_minor_alloc(struct drm_device *dev, unsigned int type)
{
struct drm_minor *minor;
unsigned long flags;
int r;
minor = kzalloc(sizeof(*minor), GFP_KERNEL);
minor = drmm_kzalloc(dev, sizeof(*minor), GFP_KERNEL);
if (!minor)
return -ENOMEM;
@ -116,46 +131,20 @@ static int drm_minor_alloc(struct drm_device *dev, unsigned int type)
idr_preload_end();
if (r < 0)
goto err_free;
return r;
minor->index = r;
r = drmm_add_action_or_reset(dev, drm_minor_alloc_release, minor);
if (r)
return r;
minor->kdev = drm_sysfs_minor_alloc(minor);
if (IS_ERR(minor->kdev)) {
r = PTR_ERR(minor->kdev);
goto err_index;
}
if (IS_ERR(minor->kdev))
return PTR_ERR(minor->kdev);
*drm_minor_get_slot(dev, type) = minor;
return 0;
err_index:
spin_lock_irqsave(&drm_minor_lock, flags);
idr_remove(&drm_minors_idr, minor->index);
spin_unlock_irqrestore(&drm_minor_lock, flags);
err_free:
kfree(minor);
return r;
}
static void drm_minor_free(struct drm_device *dev, unsigned int type)
{
struct drm_minor **slot, *minor;
unsigned long flags;
slot = drm_minor_get_slot(dev, type);
minor = *slot;
if (!minor)
return;
put_device(minor->kdev);
spin_lock_irqsave(&drm_minor_lock, flags);
idr_remove(&drm_minors_idr, minor->index);
spin_unlock_irqrestore(&drm_minor_lock, flags);
kfree(minor);
*slot = NULL;
}
static int drm_minor_register(struct drm_device *dev, unsigned int type)
@ -270,17 +259,22 @@ void drm_minor_release(struct drm_minor *minor)
* any other resources allocated at device initialization and drop the driver's
* reference to &drm_device using drm_dev_put().
*
* Note that the lifetime rules for &drm_device instance has still a lot of
* historical baggage. Hence use the reference counting provided by
* drm_dev_get() and drm_dev_put() only carefully.
* Note that any allocation or resource which is visible to userspace must be
* released only when the final drm_dev_put() is called, and not when the
* driver is unbound from the underlying physical struct &device. Best to use
* &drm_device managed resources with drmm_add_action(), drmm_kmalloc() and
* related functions.
*
* devres managed resources like devm_kmalloc() can only be used for resources
* directly related to the underlying hardware device, and only used in code
* paths fully protected by drm_dev_enter() and drm_dev_exit().
*
* Display driver example
* ~~~~~~~~~~~~~~~~~~~~~~
*
* The following example shows a typical structure of a DRM display driver.
* The example focuses on the probe() function and the other functions that are
* almost always present and serves as a demonstration of devm_drm_dev_init()
* usage with its accompanying drm_driver->release callback.
* almost always present and serves as a demonstration of devm_drm_dev_init().
*
* .. code-block:: c
*
@ -290,19 +284,8 @@ void drm_minor_release(struct drm_minor *minor)
* struct clk *pclk;
* };
*
* static void driver_drm_release(struct drm_device *drm)
* {
* struct driver_device *priv = container_of(...);
*
* drm_mode_config_cleanup(drm);
* drm_dev_fini(drm);
* kfree(priv->userspace_facing);
* kfree(priv);
* }
*
* static struct drm_driver driver_drm_driver = {
* [...]
* .release = driver_drm_release,
* };
*
* static int driver_probe(struct platform_device *pdev)
@ -322,13 +305,16 @@ void drm_minor_release(struct drm_minor *minor)
*
* ret = devm_drm_dev_init(&pdev->dev, drm, &driver_drm_driver);
* if (ret) {
* kfree(drm);
* kfree(priv);
* return ret;
* }
* drmm_add_final_kfree(drm, priv);
*
* drm_mode_config_init(drm);
* ret = drmm_mode_config_init(drm);
* if (ret)
* return ret;
*
* priv->userspace_facing = kzalloc(..., GFP_KERNEL);
* priv->userspace_facing = drmm_kzalloc(..., GFP_KERNEL);
* if (!priv->userspace_facing)
* return -ENOMEM;
*
@ -580,6 +566,23 @@ static void drm_fs_inode_free(struct inode *inode)
* used.
*/
static void drm_dev_init_release(struct drm_device *dev, void *res)
{
drm_legacy_ctxbitmap_cleanup(dev);
drm_legacy_remove_map_hash(dev);
drm_fs_inode_free(dev->anon_inode);
put_device(dev->dev);
/* Prevent use-after-free in drm_managed_release when debugging is
* enabled. Slightly awkward, but can't really be helped. */
dev->dev = NULL;
mutex_destroy(&dev->master_mutex);
mutex_destroy(&dev->clientlist_mutex);
mutex_destroy(&dev->filelist_mutex);
mutex_destroy(&dev->struct_mutex);
drm_legacy_destroy_members(dev);
}
/**
* drm_dev_init - Initialise new DRM device
* @dev: DRM device
@ -608,6 +611,9 @@ static void drm_fs_inode_free(struct inode *inode)
* arbitrary offset, you must supply a &drm_driver.release callback and control
* the finalization explicitly.
*
* Note that drivers must call drmm_add_final_kfree() after this function has
* completed successfully.
*
* RETURNS:
* 0 on success, or error code on failure.
*/
@ -629,6 +635,9 @@ int drm_dev_init(struct drm_device *dev,
dev->dev = get_device(parent);
dev->driver = driver;
INIT_LIST_HEAD(&dev->managed.resources);
spin_lock_init(&dev->managed.lock);
/* no per-device feature limits by default */
dev->driver_features = ~0u;
@ -644,26 +653,30 @@ int drm_dev_init(struct drm_device *dev,
mutex_init(&dev->clientlist_mutex);
mutex_init(&dev->master_mutex);
ret = drmm_add_action(dev, drm_dev_init_release, NULL);
if (ret)
return ret;
dev->anon_inode = drm_fs_inode_new();
if (IS_ERR(dev->anon_inode)) {
ret = PTR_ERR(dev->anon_inode);
DRM_ERROR("Cannot allocate anonymous inode: %d\n", ret);
goto err_free;
goto err;
}
if (drm_core_check_feature(dev, DRIVER_RENDER)) {
ret = drm_minor_alloc(dev, DRM_MINOR_RENDER);
if (ret)
goto err_minors;
goto err;
}
ret = drm_minor_alloc(dev, DRM_MINOR_PRIMARY);
if (ret)
goto err_minors;
goto err;
ret = drm_legacy_create_map_hash(dev);
if (ret)
goto err_minors;
goto err;
drm_legacy_ctxbitmap_init(dev);
@ -671,33 +684,19 @@ int drm_dev_init(struct drm_device *dev,
ret = drm_gem_init(dev);
if (ret) {
DRM_ERROR("Cannot initialize graphics execution manager (GEM)\n");
goto err_ctxbitmap;
goto err;
}
}
ret = drm_dev_set_unique(dev, dev_name(parent));
if (ret)
goto err_setunique;
goto err;
return 0;
err_setunique:
if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_destroy(dev);
err_ctxbitmap:
drm_legacy_ctxbitmap_cleanup(dev);
drm_legacy_remove_map_hash(dev);
err_minors:
drm_minor_free(dev, DRM_MINOR_PRIMARY);
drm_minor_free(dev, DRM_MINOR_RENDER);
drm_fs_inode_free(dev->anon_inode);
err_free:
put_device(dev->dev);
mutex_destroy(&dev->master_mutex);
mutex_destroy(&dev->clientlist_mutex);
mutex_destroy(&dev->filelist_mutex);
mutex_destroy(&dev->struct_mutex);
drm_legacy_destroy_members(dev);
err:
drm_managed_release(dev);
return ret;
}
EXPORT_SYMBOL(drm_dev_init);
@ -714,8 +713,10 @@ static void devm_drm_dev_init_release(void *data)
* @driver: DRM driver
*
* Managed drm_dev_init(). The DRM device initialized with this function is
* automatically put on driver detach using drm_dev_put(). You must supply a
* &drm_driver.release callback to control the finalization explicitly.
* automatically put on driver detach using drm_dev_put().
*
* Note that drivers must call drmm_add_final_kfree() after this function has
* completed successfully.
*
* RETURNS:
* 0 on success, or error code on failure.
@ -726,9 +727,6 @@ int devm_drm_dev_init(struct device *parent,
{
int ret;
if (WARN_ON(!driver->release))
return -EINVAL;
ret = drm_dev_init(dev, driver, parent);
if (ret)
return ret;
@ -741,43 +739,6 @@ int devm_drm_dev_init(struct device *parent,
}
EXPORT_SYMBOL(devm_drm_dev_init);
/**
* drm_dev_fini - Finalize a dead DRM device
* @dev: DRM device
*
* Finalize a dead DRM device. This is the converse to drm_dev_init() and
* frees up all data allocated by it. All driver private data should be
* finalized first. Note that this function does not free the @dev, that is
* left to the caller.
*
* The ref-count of @dev must be zero, and drm_dev_fini() should only be called
* from a &drm_driver.release callback.
*/
void drm_dev_fini(struct drm_device *dev)
{
drm_vblank_cleanup(dev);
if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_destroy(dev);
drm_legacy_ctxbitmap_cleanup(dev);
drm_legacy_remove_map_hash(dev);
drm_fs_inode_free(dev->anon_inode);
drm_minor_free(dev, DRM_MINOR_PRIMARY);
drm_minor_free(dev, DRM_MINOR_RENDER);
put_device(dev->dev);
mutex_destroy(&dev->master_mutex);
mutex_destroy(&dev->clientlist_mutex);
mutex_destroy(&dev->filelist_mutex);
mutex_destroy(&dev->struct_mutex);
drm_legacy_destroy_members(dev);
kfree(dev->unique);
}
EXPORT_SYMBOL(drm_dev_fini);
/**
* drm_dev_alloc - Allocate new DRM device
* @driver: DRM driver to allocate device for
@ -816,6 +777,8 @@ struct drm_device *drm_dev_alloc(struct drm_driver *driver,
return ERR_PTR(ret);
}
drmm_add_final_kfree(dev, dev);
return dev;
}
EXPORT_SYMBOL(drm_dev_alloc);
@ -824,12 +787,13 @@ static void drm_dev_release(struct kref *ref)
{
struct drm_device *dev = container_of(ref, struct drm_device, ref);
if (dev->driver->release) {
if (dev->driver->release)
dev->driver->release(dev);
} else {
drm_dev_fini(dev);
kfree(dev);
}
drm_managed_release(dev);
if (dev->managed.final_kfree)
kfree(dev->managed.final_kfree);
}
/**
@ -946,6 +910,11 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
struct drm_driver *driver = dev->driver;
int ret;
if (!driver->load)
drm_mode_config_validate(dev);
WARN_ON(!dev->managed.final_kfree);
if (drm_dev_needs_global_mutex(dev))
mutex_lock(&drm_global_mutex);
@ -1046,8 +1015,8 @@ EXPORT_SYMBOL(drm_dev_unregister);
*/
int drm_dev_set_unique(struct drm_device *dev, const char *name)
{
kfree(dev->unique);
dev->unique = kstrdup(name, GFP_KERNEL);
drmm_kfree(dev, dev->unique);
dev->unique = drmm_kstrdup(dev, name, GFP_KERNEL);
return dev->unique ? 0 : -ENOMEM;
}


@ -1583,8 +1583,6 @@ module_param_named(edid_fixup, edid_fixup, int, 0400);
MODULE_PARM_DESC(edid_fixup,
"Minimum number of valid EDID header bytes (0-8, default 6)");
static void drm_get_displayid(struct drm_connector *connector,
struct edid *edid);
static int validate_displayid(u8 *displayid, int length, int idx);
static int drm_edid_block_checksum(const u8 *raw_edid)
@ -2018,18 +2016,13 @@ EXPORT_SYMBOL(drm_probe_ddc);
struct edid *drm_get_edid(struct drm_connector *connector,
struct i2c_adapter *adapter)
{
struct edid *edid;
if (connector->force == DRM_FORCE_OFF)
return NULL;
if (connector->force == DRM_FORCE_UNSPECIFIED && !drm_probe_ddc(adapter))
return NULL;
edid = drm_do_get_edid(connector, drm_do_probe_ddc_edid, adapter);
if (edid)
drm_get_displayid(connector, edid);
return edid;
return drm_do_get_edid(connector, drm_do_probe_ddc_edid, adapter);
}
EXPORT_SYMBOL(drm_get_edid);
@ -3212,16 +3205,33 @@ static u8 *drm_find_edid_extension(const struct edid *edid, int ext_id)
}
static u8 *drm_find_displayid_extension(const struct edid *edid)
static u8 *drm_find_displayid_extension(const struct edid *edid,
int *length, int *idx)
{
return drm_find_edid_extension(edid, DISPLAYID_EXT);
u8 *displayid = drm_find_edid_extension(edid, DISPLAYID_EXT);
struct displayid_hdr *base;
int ret;
if (!displayid)
return NULL;
/* EDID extensions block checksum isn't for us */
*length = EDID_LENGTH - 1;
*idx = 1;
ret = validate_displayid(displayid, *length, *idx);
if (ret)
return NULL;
base = (struct displayid_hdr *)&displayid[*idx];
*length = *idx + sizeof(*base) + base->bytes;
return displayid;
}
static u8 *drm_find_cea_extension(const struct edid *edid)
{
int ret;
int idx = 1;
int length = EDID_LENGTH;
int length, idx;
struct displayid_block *block;
u8 *cea;
u8 *displayid;
@ -3232,14 +3242,10 @@ static u8 *drm_find_cea_extension(const struct edid *edid)
return cea;
/* CEA blocks can also be found embedded in a DisplayID block */
displayid = drm_find_displayid_extension(edid);
displayid = drm_find_displayid_extension(edid, &length, &idx);
if (!displayid)
return NULL;
ret = validate_displayid(displayid, length, idx);
if (ret)
return NULL;
idx += sizeof(struct displayid_hdr);
for_each_displayid_db(displayid, block, idx, length) {
if (block->tag == DATA_BLOCK_CTA) {
@ -5084,7 +5090,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
static int validate_displayid(u8 *displayid, int length, int idx)
{
int i;
int i, dispid_length;
u8 csum = 0;
struct displayid_hdr *base;
@ -5093,15 +5099,18 @@ static int validate_displayid(u8 *displayid, int length, int idx)
DRM_DEBUG_KMS("base revision 0x%x, length %d, %d %d\n",
base->rev, base->bytes, base->prod_id, base->ext_count);
if (base->bytes + 5 > length - idx)
/* +1 for DispID checksum */
dispid_length = sizeof(*base) + base->bytes + 1;
if (dispid_length > length - idx)
return -EINVAL;
for (i = idx; i <= base->bytes + 5; i++) {
csum += displayid[i];
}
for (i = 0; i < dispid_length; i++)
csum += displayid[idx + i];
if (csum) {
DRM_NOTE("DisplayID checksum invalid, remainder is %d\n", csum);
return -EINVAL;
}
return 0;
}
@ -5180,20 +5189,14 @@ static int add_displayid_detailed_modes(struct drm_connector *connector,
struct edid *edid)
{
u8 *displayid;
int ret;
int idx = 1;
int length = EDID_LENGTH;
int length, idx;
struct displayid_block *block;
int num_modes = 0;
displayid = drm_find_displayid_extension(edid);
displayid = drm_find_displayid_extension(edid, &length, &idx);
if (!displayid)
return 0;
ret = validate_displayid(displayid, length, idx);
if (ret)
return 0;
idx += sizeof(struct displayid_hdr);
for_each_displayid_db(displayid, block, idx, length) {
switch (block->tag) {
@ -5782,9 +5785,9 @@ drm_hdmi_vendor_infoframe_from_display_mode(struct hdmi_vendor_infoframe *frame,
EXPORT_SYMBOL(drm_hdmi_vendor_infoframe_from_display_mode);
static int drm_parse_tiled_block(struct drm_connector *connector,
struct displayid_block *block)
const struct displayid_block *block)
{
struct displayid_tiled_block *tile = (struct displayid_tiled_block *)block;
const struct displayid_tiled_block *tile = (struct displayid_tiled_block *)block;
u16 w, h;
u8 tile_v_loc, tile_h_loc;
u8 num_v_tile, num_h_tile;
@ -5835,22 +5838,12 @@ static int drm_parse_tiled_block(struct drm_connector *connector,
return 0;
}
static int drm_parse_display_id(struct drm_connector *connector,
u8 *displayid, int length,
bool is_edid_extension)
static int drm_displayid_parse_tiled(struct drm_connector *connector,
const u8 *displayid, int length, int idx)
{
/* if this is an EDID extension the first byte will be 0x70 */
int idx = 0;
struct displayid_block *block;
const struct displayid_block *block;
int ret;
if (is_edid_extension)
idx = 1;
ret = validate_displayid(displayid, length, idx);
if (ret)
return ret;
idx += sizeof(struct displayid_hdr);
for_each_displayid_db(displayid, block, idx, length) {
DRM_DEBUG_KMS("block id 0x%x, rev %d, len %d\n",
@ -5862,12 +5855,6 @@ static int drm_parse_display_id(struct drm_connector *connector,
if (ret)
return ret;
break;
case DATA_BLOCK_TYPE_1_DETAILED_TIMING:
/* handled in mode gathering code. */
break;
case DATA_BLOCK_CTA:
/* handled in the cea parser code. */
break;
default:
DRM_DEBUG_KMS("found DisplayID tag 0x%x, unhandled\n", block->tag);
break;
@ -5876,19 +5863,21 @@ static int drm_parse_display_id(struct drm_connector *connector,
return 0;
}
static void drm_get_displayid(struct drm_connector *connector,
struct edid *edid)
void drm_update_tile_info(struct drm_connector *connector,
const struct edid *edid)
{
void *displayid = NULL;
const void *displayid = NULL;
int length, idx;
int ret;
connector->has_tile = false;
displayid = drm_find_displayid_extension(edid);
displayid = drm_find_displayid_extension(edid, &length, &idx);
if (!displayid) {
/* drop reference to any tile group we had */
goto out_drop_ref;
}
ret = drm_parse_display_id(connector, displayid, EDID_LENGTH, true);
ret = drm_displayid_parse_tiled(connector, displayid, length, idx);
if (ret < 0)
goto out_drop_ref;
if (!connector->has_tile)


@ -514,6 +514,14 @@ struct fb_info *drm_fb_helper_alloc_fbi(struct drm_fb_helper *fb_helper)
if (ret)
goto err_release;
/*
* TODO: We really should be smarter here and alloc an aperture
* for each IORESOURCE_MEM resource helper->dev->dev has and also
* init the ranges of the apertures based on the resources.
* Note some drivers currently count on there being only 1 empty
* aperture and fill this themselves; these will need to be dealt
* with somehow when fixing this.
*/
info->apertures = alloc_apertures(1);
if (!info->apertures) {
ret = -ENOMEM;
@ -2162,6 +2170,8 @@ static const struct drm_client_funcs drm_fbdev_client_funcs = {
*
* This function sets up generic fbdev emulation for drivers that support
* dumb buffers with a virtual address and that can be mmap'ed.
* drm_fbdev_generic_setup() shall be called after the DRM driver has
* registered the new DRM device with drm_dev_register().
*
* Restore, hotplug events and teardown are all taken care of. Drivers that do
* suspend/resume need to call drm_fb_helper_set_suspend_unlocked() themselves.
@ -2178,29 +2188,30 @@ static const struct drm_client_funcs drm_fbdev_client_funcs = {
* Setup will be retried on the next hotplug event.
*
* The fbdev is destroyed by drm_dev_unregister().
*
* Returns:
* Zero on success or negative error code on failure.
*/
int drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
void drm_fbdev_generic_setup(struct drm_device *dev,
unsigned int preferred_bpp)
{
struct drm_fb_helper *fb_helper;
int ret;
WARN(dev->fb_helper, "fb_helper is already set!\n");
drm_WARN(dev, !dev->registered, "Device has not been registered.\n");
drm_WARN(dev, dev->fb_helper, "fb_helper is already set!\n");
if (!drm_fbdev_emulation)
return 0;
return;
fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL);
if (!fb_helper)
return -ENOMEM;
if (!fb_helper) {
drm_err(dev, "Failed to allocate fb_helper\n");
return;
}
ret = drm_client_init(dev, &fb_helper->client, "fbdev", &drm_fbdev_client_funcs);
if (ret) {
kfree(fb_helper);
drm_err(dev, "Failed to register client: %d\n", ret);
return ret;
return;
}
if (!preferred_bpp)
@ -2214,8 +2225,6 @@ int drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
drm_client_register(&fb_helper->client);
return 0;
}
EXPORT_SYMBOL(drm_fbdev_generic_setup);
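
A minimal sketch of the call order the documentation above now requires, mirroring the cirrus changes earlier in this series; the probe tail below is hypothetical and "drm" stands in for the driver's drm_device:

	/* tail of a driver probe function */
	ret = drm_dev_register(drm, 0);
	if (ret)
		return ret;

	/* must come after registration; failures are now only logged, not returned */
	drm_fbdev_generic_setup(drm, 32);

	return 0;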


@ -1207,10 +1207,10 @@ static const struct drm_info_list drm_framebuffer_debugfs_list[] = {
{ "framebuffer", drm_framebuffer_info, 0 },
};
int drm_framebuffer_debugfs_init(struct drm_minor *minor)
void drm_framebuffer_debugfs_init(struct drm_minor *minor)
{
return drm_debugfs_create_files(drm_framebuffer_debugfs_list,
ARRAY_SIZE(drm_framebuffer_debugfs_list),
minor->debugfs_root, minor);
drm_debugfs_create_files(drm_framebuffer_debugfs_list,
ARRAY_SIZE(drm_framebuffer_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -44,6 +44,7 @@
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include <drm/drm_vma_manager.h>
@ -77,6 +78,12 @@
* up at a later date, and as our interface with shmfs for memory allocation.
*/
static void
drm_gem_init_release(struct drm_device *dev, void *ptr)
{
drm_vma_offset_manager_destroy(dev->vma_offset_manager);
}
/**
* drm_gem_init - Initialize the GEM device fields
* @dev: drm_device structure to initialize
@ -89,7 +96,8 @@ drm_gem_init(struct drm_device *dev)
mutex_init(&dev->object_name_lock);
idr_init_base(&dev->object_name_idr, 1);
vma_offset_manager = kzalloc(sizeof(*vma_offset_manager), GFP_KERNEL);
vma_offset_manager = drmm_kzalloc(dev, sizeof(*vma_offset_manager),
GFP_KERNEL);
if (!vma_offset_manager) {
DRM_ERROR("out of memory\n");
return -ENOMEM;
@ -100,16 +108,7 @@ drm_gem_init(struct drm_device *dev)
DRM_FILE_PAGE_OFFSET_START,
DRM_FILE_PAGE_OFFSET_SIZE);
return 0;
}
void
drm_gem_destroy(struct drm_device *dev)
{
drm_vma_offset_manager_destroy(dev->vma_offset_manager);
kfree(dev->vma_offset_manager);
dev->vma_offset_manager = NULL;
return drmm_add_action(dev, drm_gem_init_release, NULL);
}
/**
@ -432,7 +431,7 @@ err_unref:
* drm_gem_handle_create - create a gem handle for an object
* @file_priv: drm file-private structure to register the handle for
* @obj: object to register
* @handlep: pionter to return the created handle to the caller
* @handlep: pointer to return the created handle to the caller
*
* Create a handle for this object. This adds a handle reference to the object,
* which includes a regular reference count. Callers will likely want to


@ -21,6 +21,13 @@
#include <drm/drm_modeset_helper.h>
#include <drm/drm_simple_kms_helper.h>
#define AFBC_HEADER_SIZE 16
#define AFBC_TH_LAYOUT_ALIGNMENT 8
#define AFBC_HDR_ALIGN 64
#define AFBC_SUPERBLOCK_PIXELS 256
#define AFBC_SUPERBLOCK_ALIGNMENT 128
#define AFBC_TH_BODY_START_ALIGNMENT 4096
/**
* DOC: overview
*
@ -54,19 +61,15 @@ struct drm_gem_object *drm_gem_fb_get_obj(struct drm_framebuffer *fb,
}
EXPORT_SYMBOL_GPL(drm_gem_fb_get_obj);
static struct drm_framebuffer *
drm_gem_fb_alloc(struct drm_device *dev,
static int
drm_gem_fb_init(struct drm_device *dev,
struct drm_framebuffer *fb,
const struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object **obj, unsigned int num_planes,
const struct drm_framebuffer_funcs *funcs)
{
struct drm_framebuffer *fb;
int ret, i;
fb = kzalloc(sizeof(*fb), GFP_KERNEL);
if (!fb)
return ERR_PTR(-ENOMEM);
drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);
for (i = 0; i < num_planes; i++)
@ -76,10 +79,9 @@ drm_gem_fb_alloc(struct drm_device *dev,
if (ret) {
drm_err(dev, "Failed to init framebuffer: %d\n", ret);
kfree(fb);
return ERR_PTR(ret);
}
return fb;
return ret;
}
/**
@ -123,10 +125,13 @@ int drm_gem_fb_create_handle(struct drm_framebuffer *fb, struct drm_file *file,
EXPORT_SYMBOL(drm_gem_fb_create_handle);
/**
* drm_gem_fb_create_with_funcs() - Helper function for the
* &drm_mode_config_funcs.fb_create
* callback
* drm_gem_fb_init_with_funcs() - Helper function for implementing
* &drm_mode_config_funcs.fb_create
* callback in cases when the driver
* allocates a subclass of
* struct drm_framebuffer
* @dev: DRM device
* @fb: framebuffer object
* @file: DRM file that holds the GEM handle(s) backing the framebuffer
* @mode_cmd: Metadata from the userspace framebuffer creation request
* @funcs: vtable to be used for the new framebuffer object
@ -134,23 +139,26 @@ EXPORT_SYMBOL(drm_gem_fb_create_handle);
* This function can be used to set &drm_framebuffer_funcs for drivers that need
* custom framebuffer callbacks. Use drm_gem_fb_create() if you don't need to
* change &drm_framebuffer_funcs. The function does buffer size validation.
* The buffer size validation is for a general case, though, so users should
* pay attention to the checks being appropriate for them or, at least,
* non-conflicting.
*
* Returns:
* Pointer to a &drm_framebuffer on success or an error pointer on failure.
* Zero or a negative error code.
*/
struct drm_framebuffer *
drm_gem_fb_create_with_funcs(struct drm_device *dev, struct drm_file *file,
const struct drm_mode_fb_cmd2 *mode_cmd,
const struct drm_framebuffer_funcs *funcs)
int drm_gem_fb_init_with_funcs(struct drm_device *dev,
struct drm_framebuffer *fb,
struct drm_file *file,
const struct drm_mode_fb_cmd2 *mode_cmd,
const struct drm_framebuffer_funcs *funcs)
{
const struct drm_format_info *info;
struct drm_gem_object *objs[4];
struct drm_framebuffer *fb;
int ret, i;
info = drm_get_format_info(dev, mode_cmd);
if (!info)
return ERR_PTR(-EINVAL);
return -EINVAL;
for (i = 0; i < info->num_planes; i++) {
unsigned int width = mode_cmd->width / (i ? info->hsub : 1);
@ -175,19 +183,55 @@ drm_gem_fb_create_with_funcs(struct drm_device *dev, struct drm_file *file,
}
}
fb = drm_gem_fb_alloc(dev, mode_cmd, objs, i, funcs);
if (IS_ERR(fb)) {
ret = PTR_ERR(fb);
ret = drm_gem_fb_init(dev, fb, mode_cmd, objs, i, funcs);
if (ret)
goto err_gem_object_put;
}
return fb;
return 0;
err_gem_object_put:
for (i--; i >= 0; i--)
drm_gem_object_put_unlocked(objs[i]);
return ERR_PTR(ret);
return ret;
}
EXPORT_SYMBOL_GPL(drm_gem_fb_init_with_funcs);
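For instance, a driver that wraps struct drm_framebuffer in its own type could use the new init helper roughly as below; foo_framebuffer and foo_fb_funcs are hypothetical names, not part of this series.

struct foo_framebuffer {
        struct drm_framebuffer base;
        /* driver-private per-framebuffer state */
};

static struct drm_framebuffer *
foo_fb_create(struct drm_device *dev, struct drm_file *file,
              const struct drm_mode_fb_cmd2 *mode_cmd)
{
        struct foo_framebuffer *foo_fb;
        int ret;

        foo_fb = kzalloc(sizeof(*foo_fb), GFP_KERNEL);
        if (!foo_fb)
                return ERR_PTR(-ENOMEM);

        ret = drm_gem_fb_init_with_funcs(dev, &foo_fb->base, file,
                                         mode_cmd, &foo_fb_funcs);
        if (ret) {
                kfree(foo_fb);
                return ERR_PTR(ret);
        }

        return &foo_fb->base;
}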
/**
* drm_gem_fb_create_with_funcs() - Helper function for the
* &drm_mode_config_funcs.fb_create
* callback
* @dev: DRM device
* @file: DRM file that holds the GEM handle(s) backing the framebuffer
* @mode_cmd: Metadata from the userspace framebuffer creation request
* @funcs: vtable to be used for the new framebuffer object
*
* This function can be used to set &drm_framebuffer_funcs for drivers that need
* custom framebuffer callbacks. Use drm_gem_fb_create() if you don't need to
* change &drm_framebuffer_funcs. The function does buffer size validation.
*
* Returns:
* Pointer to a &drm_framebuffer on success or an error pointer on failure.
*/
struct drm_framebuffer *
drm_gem_fb_create_with_funcs(struct drm_device *dev, struct drm_file *file,
const struct drm_mode_fb_cmd2 *mode_cmd,
const struct drm_framebuffer_funcs *funcs)
{
struct drm_framebuffer *fb;
int ret;
fb = kzalloc(sizeof(*fb), GFP_KERNEL);
if (!fb)
return ERR_PTR(-ENOMEM);
ret = drm_gem_fb_init_with_funcs(dev, fb, file, mode_cmd, funcs);
if (ret) {
kfree(fb);
return ERR_PTR(ret);
}
return fb;
}
EXPORT_SYMBOL_GPL(drm_gem_fb_create_with_funcs);
@ -265,6 +309,132 @@ drm_gem_fb_create_with_dirty(struct drm_device *dev, struct drm_file *file,
}
EXPORT_SYMBOL_GPL(drm_gem_fb_create_with_dirty);
static __u32 drm_gem_afbc_get_bpp(struct drm_device *dev,
const struct drm_mode_fb_cmd2 *mode_cmd)
{
const struct drm_format_info *info;
info = drm_get_format_info(dev, mode_cmd);
/* use whatever a driver has set */
if (info->cpp[0])
return info->cpp[0] * 8;
/* guess otherwise */
switch (info->format) {
case DRM_FORMAT_YUV420_8BIT:
return 12;
case DRM_FORMAT_YUV420_10BIT:
return 15;
case DRM_FORMAT_VUY101010:
return 30;
default:
break;
}
/* all attempts failed */
return 0;
}
static int drm_gem_afbc_min_size(struct drm_device *dev,
const struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_afbc_framebuffer *afbc_fb)
{
__u32 n_blocks, w_alignment, h_alignment, hdr_alignment;
/* remove bpp when all users properly encode cpp in drm_format_info */
__u32 bpp;
switch (mode_cmd->modifier[0] & AFBC_FORMAT_MOD_BLOCK_SIZE_MASK) {
case AFBC_FORMAT_MOD_BLOCK_SIZE_16x16:
afbc_fb->block_width = 16;
afbc_fb->block_height = 16;
break;
case AFBC_FORMAT_MOD_BLOCK_SIZE_32x8:
afbc_fb->block_width = 32;
afbc_fb->block_height = 8;
break;
/* no user exists yet - fall through */
case AFBC_FORMAT_MOD_BLOCK_SIZE_64x4:
case AFBC_FORMAT_MOD_BLOCK_SIZE_32x8_64x4:
default:
drm_dbg_kms(dev, "Invalid AFBC_FORMAT_MOD_BLOCK_SIZE: %lld.\n",
mode_cmd->modifier[0]
& AFBC_FORMAT_MOD_BLOCK_SIZE_MASK);
return -EINVAL;
}
/* tiled header afbc */
w_alignment = afbc_fb->block_width;
h_alignment = afbc_fb->block_height;
hdr_alignment = AFBC_HDR_ALIGN;
if (mode_cmd->modifier[0] & AFBC_FORMAT_MOD_TILED) {
w_alignment *= AFBC_TH_LAYOUT_ALIGNMENT;
h_alignment *= AFBC_TH_LAYOUT_ALIGNMENT;
hdr_alignment = AFBC_TH_BODY_START_ALIGNMENT;
}
afbc_fb->aligned_width = ALIGN(mode_cmd->width, w_alignment);
afbc_fb->aligned_height = ALIGN(mode_cmd->height, h_alignment);
afbc_fb->offset = mode_cmd->offsets[0];
bpp = drm_gem_afbc_get_bpp(dev, mode_cmd);
if (!bpp) {
drm_dbg_kms(dev, "Invalid AFBC bpp value: %d\n", bpp);
return -EINVAL;
}
n_blocks = (afbc_fb->aligned_width * afbc_fb->aligned_height)
/ AFBC_SUPERBLOCK_PIXELS;
afbc_fb->afbc_size = ALIGN(n_blocks * AFBC_HEADER_SIZE, hdr_alignment);
afbc_fb->afbc_size += n_blocks * ALIGN(bpp * AFBC_SUPERBLOCK_PIXELS / 8,
AFBC_SUPERBLOCK_ALIGNMENT);
return 0;
}
/**
* drm_gem_fb_afbc_init() - Helper function for drivers using afbc to
* fill and validate all the afbc-specific
* struct drm_afbc_framebuffer members
*
* @dev: DRM device
* @mode_cmd: Metadata from the userspace framebuffer creation request
* @afbc_fb: afbc-specific framebuffer
*
* This function can be used by drivers which support afbc to complete
* the preparation of struct drm_afbc_framebuffer. It must be called after
* allocating the said struct and calling drm_gem_fb_init_with_funcs().
* It is the caller's responsibility to put the afbc_fb->base.obj objects in case
* the call is unsuccessful.
*
* Returns:
* Zero on success or a negative error value on failure.
*/
int drm_gem_fb_afbc_init(struct drm_device *dev,
const struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_afbc_framebuffer *afbc_fb)
{
const struct drm_format_info *info;
struct drm_gem_object **objs;
int ret;
objs = afbc_fb->base.obj;
info = drm_get_format_info(dev, mode_cmd);
if (!info)
return -EINVAL;
ret = drm_gem_afbc_min_size(dev, mode_cmd, afbc_fb);
if (ret < 0)
return ret;
if (objs[0]->size < afbc_fb->afbc_size)
return -EINVAL;
return 0;
}
EXPORT_SYMBOL_GPL(drm_gem_fb_afbc_init);
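Putting the helpers together, an afbc-aware fb_create implementation could look roughly like the sketch below (foo_fb_funcs is hypothetical, and the error path is abbreviated: after a successful init, the GEM references in base.obj[] must also be dropped on failure, as noted above). As a worked size example: a 1920x1080 XRGB8888 buffer with 16x16 superblocks aligns to 1920x1088, giving 1920*1088/256 = 8160 superblocks, i.e. ALIGN(8160*16, 64) = 130560 header bytes plus 8160*ALIGN(32*256/8, 128) = 8355840 payload bytes.

static struct drm_framebuffer *
foo_fb_create(struct drm_device *dev, struct drm_file *file,
              const struct drm_mode_fb_cmd2 *mode_cmd)
{
        struct drm_afbc_framebuffer *afbc_fb;
        int ret;

        afbc_fb = kzalloc(sizeof(*afbc_fb), GFP_KERNEL);
        if (!afbc_fb)
                return ERR_PTR(-ENOMEM);

        ret = drm_gem_fb_init_with_funcs(dev, &afbc_fb->base, file,
                                         mode_cmd, &foo_fb_funcs);
        if (ret)
                goto err_free;

        /* fills block/aligned dimensions and afbc_size, checks obj[0]->size */
        ret = drm_gem_fb_afbc_init(dev, mode_cmd, afbc_fb);
        if (ret)
                goto err_cleanup;

        return &afbc_fb->base;

err_cleanup:
        drm_framebuffer_cleanup(&afbc_fb->base);
        /* also drop the afbc_fb->base.obj[] references here */
err_free:
        kfree(afbc_fb);
        return ERR_PTR(ret);
}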
/**
* drm_gem_fb_prepare_fb() - Prepare a GEM backed framebuffer
* @plane: Plane


@ -1,10 +1,13 @@
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/module.h>
#include <drm/drm_debugfs.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_ttm_helper.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_mode.h>
@ -18,13 +21,93 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
/**
* DOC: overview
*
* This library provides a GEM buffer object that is backed by video RAM
* (VRAM). It can be used for framebuffer devices with dedicated memory.
* This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM
* buffer object that is backed by video RAM (VRAM). It can be used for
* framebuffer devices with dedicated memory.
*
* The data structure &struct drm_vram_mm and its helpers implement a memory
* manager for simple framebuffer devices with dedicated video memory. Buffer
* objects are either placed in video RAM or evicted to system memory. The rsp.
* buffer object is provided by &struct drm_gem_vram_object.
* manager for simple framebuffer devices with dedicated video memory. GEM
* VRAM buffer objects are either placed in the video memory or remain evicted
* to system memory.
*
* With the GEM interface userspace applications create, manage and destroy
* graphics buffers, such as an on-screen framebuffer. GEM does not provide
* an implementation of these interfaces. It's up to the DRM driver to
* provide an implementation that suits the hardware. If the hardware device
* contains dedicated video memory, the DRM driver can use the VRAM helper
* library. Each active buffer object is stored in video RAM. Active
* buffers are used for drawing the current frame, typically something like
* the frame's scanout buffer or the cursor image. If there's no more space
* left in VRAM, inactive GEM objects can be moved to system memory.
*
* The easiest way to use the VRAM helper library is to call
* drm_vram_helper_alloc_mm(). The function allocates and initializes an
* instance of &struct drm_vram_mm in &struct drm_device.vram_mm . Use
* &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and
* &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations;
* as illustrated below.
*
* .. code-block:: c
*
* struct file_operations fops = {
* .owner = THIS_MODULE,
* DRM_VRAM_MM_FILE_OPERATIONS
* };
* struct drm_driver drv = {
* .driver_feature = DRM_ ... ,
* .fops = &fops,
* DRM_GEM_VRAM_DRIVER
* };
*
* int init_drm_driver()
* {
* struct drm_device *dev;
* uint64_t vram_base;
* unsigned long vram_size;
* int ret;
*
* // setup device, vram base and size
* // ...
*
* ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
* if (ret)
* return ret;
* return 0;
* }
*
* This creates an instance of &struct drm_vram_mm, exports DRM userspace
* interfaces for GEM buffer management and initializes file operations to
* allow for accessing created GEM buffers. With this setup, the DRM driver
* manages an area of video RAM with VRAM MM and provides GEM VRAM objects
* to userspace.
*
* To clean up the VRAM memory management, call drm_vram_helper_release_mm()
* in the driver's clean-up code.
*
* .. code-block:: c
*
* void fini_drm_driver()
* {
* struct drm_device *dev = ...;
*
* drm_vram_helper_release_mm(dev);
* }
*
* For drawing or scanout operations, buffer objects have to be pinned in video
* RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
* &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system
* memory. Call drm_gem_vram_unpin() to release the pinned object afterwards.
*
* A buffer object that is pinned in video RAM has a fixed address within that
* memory region. Call drm_gem_vram_offset() to retrieve this value. Typically
* it's used to program the hardware's scanout engine for framebuffers, set
* the cursor overlay's image for a mouse cursor, or use it as input to the
* hardware's drawing engine.
*
* To access a buffer object's memory from the DRM driver, call
* drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address
* space and returns the memory address. Use drm_gem_vram_kunmap() to
* release the mapping.
*/
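As a sketch of the pin/offset flow described above (the surrounding foo driver is hypothetical; the buffer must stay pinned for as long as it is being scanned out, and drm_gem_vram_unpin() releases it again later):

static int foo_pin_scanout(struct drm_framebuffer *fb, u64 *scanout_base)
{
        struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
        s64 offset;
        int ret;

        ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
        if (ret)
                return ret;

        offset = drm_gem_vram_offset(gbo);
        if (offset < 0) {
                drm_gem_vram_unpin(gbo);
                return (int)offset;
        }

        *scanout_base = (u64)offset;    /* program the hardware with this */
        return 0;
}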
/*
@ -670,9 +753,9 @@ EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset);
* @plane: a DRM plane
* @new_state: the plane's new state
*
* During plane updates, this function pins the GEM VRAM
* objects of the plane's new framebuffer to VRAM. Call
* drm_gem_vram_plane_helper_cleanup_fb() to unpin them.
* During plane updates, this function sets the plane's fence and
* pins the GEM VRAM objects of the plane's new framebuffer to VRAM.
* Call drm_gem_vram_plane_helper_cleanup_fb() to unpin them.
*
* Returns:
* 0 on success, or
@ -698,6 +781,10 @@ drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
goto err_drm_gem_vram_unpin;
}
ret = drm_gem_fb_prepare_fb(plane, new_state);
if (ret)
goto err_drm_gem_vram_unpin;
return 0;
err_drm_gem_vram_unpin:
@ -1018,7 +1105,6 @@ static struct ttm_bo_driver bo_driver = {
* struct drm_vram_mm
*/
#if defined(CONFIG_DEBUG_FS)
static int drm_vram_mm_debugfs(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
@ -1035,27 +1121,18 @@ static int drm_vram_mm_debugfs(struct seq_file *m, void *data)
static const struct drm_info_list drm_vram_mm_debugfs_list[] = {
{ "vram-mm", drm_vram_mm_debugfs, 0, NULL },
};
#endif
/**
* drm_vram_mm_debugfs_init() - Register VRAM MM debugfs file.
*
* @minor: drm minor device.
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_debugfs_init(struct drm_minor *minor)
void drm_vram_mm_debugfs_init(struct drm_minor *minor)
{
int ret = 0;
#if defined(CONFIG_DEBUG_FS)
ret = drm_debugfs_create_files(drm_vram_mm_debugfs_list,
ARRAY_SIZE(drm_vram_mm_debugfs_list),
minor->debugfs_root, minor);
#endif
return ret;
drm_debugfs_create_files(drm_vram_mm_debugfs_list,
ARRAY_SIZE(drm_vram_mm_debugfs_list),
minor->debugfs_root, minor);
}
EXPORT_SYMBOL(drm_vram_mm_debugfs_init);
@ -1202,3 +1279,6 @@ drm_vram_helper_mode_valid(struct drm_device *dev,
return drm_vram_helper_mode_valid_internal(dev, mode, max_bpp);
}
EXPORT_SYMBOL(drm_vram_helper_mode_valid);
MODULE_DESCRIPTION("DRM VRAM memory-management helpers");
MODULE_LICENSE("GPL");


@ -89,9 +89,11 @@ void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpr
struct drm_minor *drm_minor_acquire(unsigned int minor_id);
void drm_minor_release(struct drm_minor *minor);
/* drm_managed.c */
void drm_managed_release(struct drm_device *dev);
/* drm_vblank.c */
void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe);
void drm_vblank_cleanup(struct drm_device *dev);
/* IOCTLS */
int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
@ -141,7 +143,6 @@ void drm_sysfs_lease_event(struct drm_device *dev);
/* drm_gem.c */
struct drm_gem_object;
int drm_gem_init(struct drm_device *dev);
void drm_gem_destroy(struct drm_device *dev);
int drm_gem_handle_create_tail(struct drm_file *file_priv,
struct drm_gem_object *obj,
u32 *handlep);
@ -235,4 +236,4 @@ int drm_syncobj_query_ioctl(struct drm_device *dev, void *data,
/* drm_framebuffer.c */
void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_framebuffer *fb);
int drm_framebuffer_debugfs_init(struct drm_minor *minor);
void drm_framebuffer_debugfs_init(struct drm_minor *minor);


@ -599,8 +599,8 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_legacy_setsareactx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_legacy_getsareactx, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, 0),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_legacy_addctx, DRM_AUTH|DRM_ROOT_ONLY),
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_legacy_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),


@ -0,0 +1,275 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020 Intel
*
* Based on drivers/base/devres.c
*/
#include <drm/drm_managed.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <drm/drm_device.h>
#include <drm/drm_print.h>
/**
* DOC: managed resources
*
* Inspired by struct &device managed resources, but tied to the lifetime of
* struct &drm_device, which can outlive the underlying physical device, usually
* when userspace has some open files and other handles to resources still open.
*
* Release actions can be added with drmm_add_action(), memory allocations can
* be done directly with drmm_kmalloc() and the related functions. Everything
* will be released on the final drm_dev_put() in reverse order of how the
* release actions have been added and memory has been allocated since driver
* loading started with drm_dev_init().
*
* Note that release actions and managed memory can also be added and removed
* during the lifetime of the driver, all the functions are fully concurrent
* safe. But it is recommended to use managed resources only for resources that
* change rarely, if ever, during the lifetime of the &drm_device instance.
*/
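A minimal sketch of the pattern, using hypothetical foo_* names: driver state comes from drmm_kzalloc() and the matching teardown is registered as a release action, so both are undone automatically, in reverse order, on the final drm_dev_put().

struct foo_state {
        void __iomem *mmio;
};

static void foo_disable_hw(struct drm_device *dev, void *data)
{
        struct foo_state *state = data;

        writel(0, state->mmio);         /* e.g. disable the block again */
}

static int foo_init(struct drm_device *dev)
{
        struct foo_state *state;

        state = drmm_kzalloc(dev, sizeof(*state), GFP_KERNEL);
        if (!state)
                return -ENOMEM;

        /* ... program the hardware ... */

        return drmm_add_action_or_reset(dev, foo_disable_hw, state);
}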
struct drmres_node {
struct list_head entry;
drmres_release_t release;
const char *name;
size_t size;
};
struct drmres {
struct drmres_node node;
/*
* Some archs want to perform DMA into kmalloc caches
* and need a guaranteed alignment larger than
* the alignment of a 64-bit integer.
* Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same
* buffer alignment as if it was allocated by plain kmalloc().
*/
u8 __aligned(ARCH_KMALLOC_MINALIGN) data[];
};
static void free_dr(struct drmres *dr)
{
kfree_const(dr->node.name);
kfree(dr);
}
void drm_managed_release(struct drm_device *dev)
{
struct drmres *dr, *tmp;
drm_dbg_drmres(dev, "drmres release begin\n");
list_for_each_entry_safe(dr, tmp, &dev->managed.resources, node.entry) {
drm_dbg_drmres(dev, "REL %p %s (%zu bytes)\n",
dr, dr->node.name, dr->node.size);
if (dr->node.release)
dr->node.release(dev, dr->node.size ? *(void **)&dr->data : NULL);
list_del(&dr->node.entry);
free_dr(dr);
}
drm_dbg_drmres(dev, "drmres release end\n");
}
/*
* Always inline so that kmalloc_track_caller tracks the actual interesting
* caller outside of drm_managed.c.
*/
static __always_inline struct drmres * alloc_dr(drmres_release_t release,
size_t size, gfp_t gfp, int nid)
{
size_t tot_size;
struct drmres *dr;
/* We must catch any near-SIZE_MAX cases that could overflow. */
if (unlikely(check_add_overflow(sizeof(*dr), size, &tot_size)))
return NULL;
dr = kmalloc_node_track_caller(tot_size, gfp, nid);
if (unlikely(!dr))
return NULL;
memset(dr, 0, offsetof(struct drmres, data));
INIT_LIST_HEAD(&dr->node.entry);
dr->node.release = release;
dr->node.size = size;
return dr;
}
static void del_dr(struct drm_device *dev, struct drmres *dr)
{
list_del_init(&dr->node.entry);
drm_dbg_drmres(dev, "DEL %p %s (%lu bytes)\n",
dr, dr->node.name, (unsigned long) dr->node.size);
}
static void add_dr(struct drm_device *dev, struct drmres *dr)
{
unsigned long flags;
spin_lock_irqsave(&dev->managed.lock, flags);
list_add(&dr->node.entry, &dev->managed.resources);
spin_unlock_irqrestore(&dev->managed.lock, flags);
drm_dbg_drmres(dev, "ADD %p %s (%lu bytes)\n",
dr, dr->node.name, (unsigned long) dr->node.size);
}
/**
* drmm_add_final_kfree - add release action for the final kfree()
* @dev: DRM device
* @container: pointer to the kmalloc allocation containing @dev
*
* Since the allocation containing the struct &drm_device must be allocated
* before it can be initialized with drm_dev_init() there's no way to allocate
* that memory with drmm_kmalloc(). To side-step this chicken-egg problem the
* pointer for this final kfree() must be specified by calling this function. It
* will be released in the final drm_dev_put() for @dev, after all other release
* actions installed through drmm_add_action() have been processed.
*/
void drmm_add_final_kfree(struct drm_device *dev, void *container)
{
WARN_ON(dev->managed.final_kfree);
WARN_ON(dev < (struct drm_device *) container);
WARN_ON(dev + 1 > (struct drm_device *) (container + ksize(container)));
dev->managed.final_kfree = container;
}
EXPORT_SYMBOL(drmm_add_final_kfree);
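Sketched for a hypothetical driver structure that embeds struct drm_device (foo_device and foo_drm_driver are made-up names), the intended sequence is:

struct foo_device {
        struct drm_device drm;
        /* driver-private members */
};

static struct foo_device *foo_device_create(struct device *parent)
{
        struct foo_device *foo;
        int ret;

        foo = kzalloc(sizeof(*foo), GFP_KERNEL);
        if (!foo)
                return ERR_PTR(-ENOMEM);

        ret = drm_dev_init(&foo->drm, &foo_drm_driver, parent);
        if (ret) {
                kfree(foo);
                return ERR_PTR(ret);
        }
        drmm_add_final_kfree(&foo->drm, foo);

        return foo;
}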
int __drmm_add_action(struct drm_device *dev,
drmres_release_t action,
void *data, const char *name)
{
struct drmres *dr;
void **void_ptr;
dr = alloc_dr(action, data ? sizeof(void*) : 0,
GFP_KERNEL | __GFP_ZERO,
dev_to_node(dev->dev));
if (!dr) {
drm_dbg_drmres(dev, "failed to add action %s for %p\n",
name, data);
return -ENOMEM;
}
dr->node.name = kstrdup_const(name, GFP_KERNEL);
if (data) {
void_ptr = (void **)&dr->data;
*void_ptr = data;
}
add_dr(dev, dr);
return 0;
}
EXPORT_SYMBOL(__drmm_add_action);
int __drmm_add_action_or_reset(struct drm_device *dev,
drmres_release_t action,
void *data, const char *name)
{
int ret;
ret = __drmm_add_action(dev, action, data, name);
if (ret)
action(dev, data);
return ret;
}
EXPORT_SYMBOL(__drmm_add_action_or_reset);
/**
* drmm_kmalloc - &drm_device managed kmalloc()
* @dev: DRM device
* @size: size of the memory allocation
* @gfp: GFP allocation flags
*
* This is a &drm_device managed version of kmalloc(). The allocated memory is
* automatically freed on the final drm_dev_put(). Memory can also be freed
* before the final drm_dev_put() by calling drmm_kfree().
*/
void *drmm_kmalloc(struct drm_device *dev, size_t size, gfp_t gfp)
{
struct drmres *dr;
dr = alloc_dr(NULL, size, gfp, dev_to_node(dev->dev));
if (!dr) {
drm_dbg_drmres(dev, "failed to allocate %zu bytes, %u flags\n",
size, gfp);
return NULL;
}
dr->node.name = kstrdup_const("kmalloc", GFP_KERNEL);
add_dr(dev, dr);
return dr->data;
}
EXPORT_SYMBOL(drmm_kmalloc);
/**
* drmm_kstrdup - &drm_device managed kstrdup()
* @dev: DRM device
* @s: 0-terminated string to be duplicated
* @gfp: GFP allocation flags
*
* This is a &drm_device managed version of kstrdup(). The allocated memory is
* automatically freed on the final drm_dev_put() and works exactly like a
* memory allocation obtained by drmm_kmalloc().
*/
char *drmm_kstrdup(struct drm_device *dev, const char *s, gfp_t gfp)
{
size_t size;
char *buf;
if (!s)
return NULL;
size = strlen(s) + 1;
buf = drmm_kmalloc(dev, size, gfp);
if (buf)
memcpy(buf, s, size);
return buf;
}
EXPORT_SYMBOL_GPL(drmm_kstrdup);
/**
* drmm_kfree - &drm_device managed kfree()
* @dev: DRM device
* @data: memory allocation to be freed
*
* This is a &drm_device managed version of kfree() which can be used to
* release memory allocated through drmm_kmalloc() or any of its related
* functions before the final drm_dev_put() of @dev.
*/
void drmm_kfree(struct drm_device *dev, void *data)
{
struct drmres *dr_match = NULL, *dr;
unsigned long flags;
if (!data)
return;
spin_lock_irqsave(&dev->managed.lock, flags);
list_for_each_entry(dr, &dev->managed.resources, node.entry) {
if (dr->data == data) {
dr_match = dr;
del_dr(dev, dr_match);
break;
}
}
spin_unlock_irqrestore(&dev->managed.lock, flags);
if (WARN_ON(!dr_match))
return;
free_dr(dr_match);
}
EXPORT_SYMBOL(drmm_kfree);
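For example, a scratch buffer allocated during probe can stay managed yet still be returned early once it is no longer needed (sketch):

        void *scratch;

        scratch = drmm_kmalloc(dev, SZ_4K, GFP_KERNEL);
        if (!scratch)
                return -ENOMEM;

        /* ... use scratch during initialization ... */

        /* optional: release before the final drm_dev_put() */
        drmm_kfree(dev, scratch);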


@ -169,7 +169,8 @@ int mipi_dbi_command_buf(struct mipi_dbi *dbi, u8 cmd, u8 *data, size_t len)
EXPORT_SYMBOL(mipi_dbi_command_buf);
/* This should only be used by mipi_dbi_command() */
int mipi_dbi_command_stackbuf(struct mipi_dbi *dbi, u8 cmd, u8 *data, size_t len)
int mipi_dbi_command_stackbuf(struct mipi_dbi *dbi, u8 cmd, const u8 *data,
size_t len)
{
u8 *buf;
int ret;
@ -510,6 +511,10 @@ int mipi_dbi_dev_init_with_formats(struct mipi_dbi_dev *dbidev,
if (!dbidev->dbi.command)
return -EINVAL;
ret = drmm_mode_config_init(drm);
if (ret)
return ret;
dbidev->tx_buf = devm_kmalloc(drm->dev, tx_buf_size, GFP_KERNEL);
if (!dbidev->tx_buf)
return -ENOMEM;
@ -578,26 +583,6 @@ int mipi_dbi_dev_init(struct mipi_dbi_dev *dbidev,
}
EXPORT_SYMBOL(mipi_dbi_dev_init);
/**
* mipi_dbi_release - DRM driver release helper
* @drm: DRM device
*
* This function finalizes and frees &mipi_dbi.
*
* Drivers can use this as their &drm_driver->release callback.
*/
void mipi_dbi_release(struct drm_device *drm)
{
struct mipi_dbi_dev *dbidev = drm_to_mipi_dbi_dev(drm);
DRM_DEBUG_DRIVER("\n");
drm_mode_config_cleanup(drm);
drm_dev_fini(drm);
kfree(dbidev);
}
EXPORT_SYMBOL(mipi_dbi_release);
/**
* mipi_dbi_hw_reset - Hardware reset of controller
* @dbi: MIPI DBI structure
@ -1308,10 +1293,8 @@ static const struct file_operations mipi_dbi_debugfs_command_fops = {
* controller or getting the read command values.
* Drivers can use this as their &drm_driver->debugfs_init callback.
*
* Returns:
* Zero on success, negative error code on failure.
*/
int mipi_dbi_debugfs_init(struct drm_minor *minor)
void mipi_dbi_debugfs_init(struct drm_minor *minor)
{
struct mipi_dbi_dev *dbidev = drm_to_mipi_dbi_dev(minor->dev);
umode_t mode = S_IFREG | S_IWUSR;
@ -1320,8 +1303,6 @@ int mipi_dbi_debugfs_init(struct drm_minor *minor)
mode |= S_IRUGO;
debugfs_create_file("command", mode, minor->debugfs_root, dbidev,
&mipi_dbi_debugfs_command_fops);
return 0;
}
EXPORT_SYMBOL(mipi_dbi_debugfs_init);


@ -25,6 +25,7 @@
#include <drm/drm_drv.h>
#include <drm/drm_encoder.h>
#include <drm/drm_file.h>
#include <drm/drm_managed.h>
#include <drm/drm_mode_config.h>
#include <drm/drm_print.h>
#include <linux/dma-resv.h>
@ -373,8 +374,14 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
return 0;
}
static void drm_mode_config_init_release(struct drm_device *dev, void *ptr)
{
drm_mode_config_cleanup(dev);
}
/**
* drm_mode_config_init - initialize DRM mode_configuration structure
* drmm_mode_config_init - managed DRM mode_configuration structure
* initialization
* @dev: DRM device
*
* Initialize @dev's mode_config structure, used for tracking the graphics
@ -384,8 +391,12 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
* problem, since this should happen single threaded at init time. It is the
* driver's problem to ensure this guarantee.
*
* Cleanup is automatically handled through registering drm_mode_config_cleanup
* with drmm_add_action().
*
* Returns: 0 on success, negative error value on failure.
*/
void drm_mode_config_init(struct drm_device *dev)
int drmm_mode_config_init(struct drm_device *dev)
{
mutex_init(&dev->mode_config.mutex);
drm_modeset_lock_init(&dev->mode_config.connection_mutex);
@ -443,8 +454,11 @@ void drm_mode_config_init(struct drm_device *dev)
drm_modeset_acquire_fini(&modeset_ctx);
dma_resv_fini(&resv);
}
return drmm_add_action_or_reset(dev, drm_mode_config_init_release,
NULL);
}
EXPORT_SYMBOL(drm_mode_config_init);
EXPORT_SYMBOL(drmm_mode_config_init);
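Driver-side, mode-config setup then shrinks to a single call with no matching cleanup in the error or unload paths (foo_mode_config_funcs is hypothetical):

static int foo_modeset_init(struct drm_device *dev)
{
        int ret;

        ret = drmm_mode_config_init(dev);
        if (ret)
                return ret;

        dev->mode_config.min_width = 0;
        dev->mode_config.min_height = 0;
        dev->mode_config.max_width = 4096;
        dev->mode_config.max_height = 4096;
        dev->mode_config.funcs = &foo_mode_config_funcs;

        /* no drm_mode_config_cleanup() needed anywhere in this driver */
        return 0;
}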
/**
* drm_mode_config_cleanup - free up DRM mode_config info
@ -456,6 +470,9 @@ EXPORT_SYMBOL(drm_mode_config_init);
* Note that since this /should/ happen single-threaded at driver/device
* teardown time, no locking is required. It's the driver's job to ensure that
* this guarantee actually holds true.
*
* FIXME: With the managed drmm_mode_config_init() it is no longer necessary for
* drivers to explicitly call this function.
*/
void drm_mode_config_cleanup(struct drm_device *dev)
{
@ -532,3 +549,90 @@ void drm_mode_config_cleanup(struct drm_device *dev)
drm_modeset_lock_fini(&dev->mode_config.connection_mutex);
}
EXPORT_SYMBOL(drm_mode_config_cleanup);
static u32 full_encoder_mask(struct drm_device *dev)
{
struct drm_encoder *encoder;
u32 encoder_mask = 0;
drm_for_each_encoder(encoder, dev)
encoder_mask |= drm_encoder_mask(encoder);
return encoder_mask;
}
/*
* For some reason we want the encoder itself included in
* possible_clones. Make life easy for drivers by allowing them
* to leave possible_clones unset if no cloning is possible.
*/
static void fixup_encoder_possible_clones(struct drm_encoder *encoder)
{
if (encoder->possible_clones == 0)
encoder->possible_clones = drm_encoder_mask(encoder);
}
static void validate_encoder_possible_clones(struct drm_encoder *encoder)
{
struct drm_device *dev = encoder->dev;
u32 encoder_mask = full_encoder_mask(dev);
struct drm_encoder *other;
drm_for_each_encoder(other, dev) {
WARN(!!(encoder->possible_clones & drm_encoder_mask(other)) !=
!!(other->possible_clones & drm_encoder_mask(encoder)),
"possible_clones mismatch: "
"[ENCODER:%d:%s] mask=0x%x possible_clones=0x%x vs. "
"[ENCODER:%d:%s] mask=0x%x possible_clones=0x%x\n",
encoder->base.id, encoder->name,
drm_encoder_mask(encoder), encoder->possible_clones,
other->base.id, other->name,
drm_encoder_mask(other), other->possible_clones);
}
WARN((encoder->possible_clones & drm_encoder_mask(encoder)) == 0 ||
(encoder->possible_clones & ~encoder_mask) != 0,
"Bogus possible_clones: "
"[ENCODER:%d:%s] possible_clones=0x%x (full encoder mask=0x%x)\n",
encoder->base.id, encoder->name,
encoder->possible_clones, encoder_mask);
}
static u32 full_crtc_mask(struct drm_device *dev)
{
struct drm_crtc *crtc;
u32 crtc_mask = 0;
drm_for_each_crtc(crtc, dev)
crtc_mask |= drm_crtc_mask(crtc);
return crtc_mask;
}
static void validate_encoder_possible_crtcs(struct drm_encoder *encoder)
{
u32 crtc_mask = full_crtc_mask(encoder->dev);
WARN((encoder->possible_crtcs & crtc_mask) == 0 ||
(encoder->possible_crtcs & ~crtc_mask) != 0,
"Bogus possible_crtcs: "
"[ENCODER:%d:%s] possible_crtcs=0x%x (full crtc mask=0x%x)\n",
encoder->base.id, encoder->name,
encoder->possible_crtcs, crtc_mask);
}
void drm_mode_config_validate(struct drm_device *dev)
{
struct drm_encoder *encoder;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return;
drm_for_each_encoder(encoder, dev)
fixup_encoder_possible_clones(encoder);
drm_for_each_encoder(encoder, dev) {
validate_encoder_possible_clones(encoder);
validate_encoder_possible_crtcs(encoder);
}
}
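For drivers this means an encoder that cannot be cloned can simply leave possible_clones at 0 and only fill in possible_crtcs; the fixup above rewrites 0 to the encoder's own mask before validation. A sketch, with a hypothetical foo driver:

        encoder->possible_crtcs = drm_crtc_mask(&foo->crtc);
        encoder->possible_clones = 0;   /* becomes drm_encoder_mask(encoder) */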


@ -30,12 +30,13 @@
#include <drm/drm.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_drv.h>
#include <drm/drm_pci.h>
#include <drm/drm_print.h>
#include "drm_internal.h"
#include "drm_legacy.h"
#ifdef CONFIG_DRM_LEGACY
/**
* drm_pci_alloc - Allocate a PCI consistent memory block, for DMA.
* @dev: DRM device
@ -93,6 +94,7 @@ void drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
}
EXPORT_SYMBOL(drm_pci_free);
#endif
static int drm_get_pci_domain(struct drm_device *dev)
{


@ -30,6 +30,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_drv.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_managed.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
@ -40,6 +41,69 @@
/**
* DOC: vblank handling
*
* From the computer's perspective, every time the monitor displays
* a new frame the scanout engine has "scanned out" the display image
* from top to bottom, one row of pixels at a time. The current row
* of pixels is referred to as the current scanline.
*
* In addition to the display's visible area, there's usually a couple of
* extra scanlines which aren't actually displayed on the screen.
* These extra scanlines don't contain image data and are occasionally used
* for features like audio and infoframes. The region made up of these
* scanlines is referred to as the vertical blanking region, or vblank for
* short.
*
* For historical reference, the vertical blanking period was designed to
* give the electron gun (on CRTs) enough time to move back to the top of
* the screen to start scanning out the next frame. Similar for horizontal
* blanking periods. They were designed to give the electron gun enough
* time to move back to the other side of the screen to start scanning the
* next scanline.
*
* ::
*
*
*     physical
*     top of        |                                    |
*     display       |                                    |
*                   |              New frame             |
*                   |                                    |
*                   |                                    |
*                   |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|  Scanline,
*                   |                                    |  updates the
*                   |                                    |  frame as it
*                   |                                    |  travels down
*                   |                                    |  ("scan out")
*                   |              Old frame             |
*                   |                                    |
*                   |                                    |
*                   |                                    |
*                   |                                    |  physical
*                   |                                    |  bottom of
*     vertical      |                                    |  display
*     blanking      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
*     region        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
*                   xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
*                                                            start of
*                                                            new frame
*
* "Physical top of display" is the reference point for the high-precision/
* corrected timestamp.
*
* On a lot of display hardware, programming needs to take effect during the
* vertical blanking period so that settings like gamma, the image buffer
* to be scanned out, etc. can safely be changed without showing
* any visual artifacts on the screen. In some unforgiving hardware, some of
* this programming has to both start and end in the same vblank. To help
* with the timing of the hardware programming, an interrupt is usually
* available to notify the driver when it can start the updating of registers.
* The interrupt is in this context named the vblank interrupt.
*
* The vblank interrupt may be fired at different points depending on the
* hardware. Some hardware implementations will fire the interrupt when the
* new frame starts, other implementations will fire the interrupt at different
* points in time.
*
* Vertical blanking plays a major role in graphics rendering. To achieve
* tear-free display, users must synchronize page flips and/or rendering to
* vertical blanking. The DRM API offers ioctls to perform page flips
@ -425,14 +489,10 @@ static void vblank_disable_fn(struct timer_list *t)
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
void drm_vblank_cleanup(struct drm_device *dev)
static void drm_vblank_init_release(struct drm_device *dev, void *ptr)
{
unsigned int pipe;
/* Bail if the driver didn't call drm_vblank_init() */
if (dev->num_crtcs == 0)
return;
for (pipe = 0; pipe < dev->num_crtcs; pipe++) {
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
@ -441,10 +501,6 @@ void drm_vblank_cleanup(struct drm_device *dev)
del_timer_sync(&vblank->disable_timer);
}
kfree(dev->vblank);
dev->num_crtcs = 0;
}
/**
@ -453,25 +509,29 @@ void drm_vblank_cleanup(struct drm_device *dev)
* @num_crtcs: number of CRTCs supported by @dev
*
* This function initializes vblank support for @num_crtcs display pipelines.
* Cleanup is handled by the DRM core, or through calling drm_dev_fini() for
* drivers with a &drm_driver.release callback.
* Cleanup is handled automatically through a cleanup function added with
* drmm_add_action().
*
* Returns:
* Zero on success or a negative error code on failure.
*/
int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
{
int ret = -ENOMEM;
int ret;
unsigned int i;
spin_lock_init(&dev->vbl_lock);
spin_lock_init(&dev->vblank_time_lock);
dev->vblank = drmm_kcalloc(dev, num_crtcs, sizeof(*dev->vblank), GFP_KERNEL);
if (!dev->vblank)
return -ENOMEM;
dev->num_crtcs = num_crtcs;
dev->vblank = kcalloc(num_crtcs, sizeof(*dev->vblank), GFP_KERNEL);
if (!dev->vblank)
goto err;
ret = drmm_add_action(dev, drm_vblank_init_release, NULL);
if (ret)
return ret;
for (i = 0; i < num_crtcs; i++) {
struct drm_vblank_crtc *vblank = &dev->vblank[i];
@ -486,10 +546,6 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
return 0;
err:
dev->num_crtcs = 0;
return ret;
}
EXPORT_SYMBOL(drm_vblank_init);
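A driver now initializes vblank support once and relies on the managed release for teardown; the interrupt handler only has to call drm_crtc_handle_vblank() for the CRTC that fired. A sketch, with a hypothetical foo driver:

static int foo_vblank_init(struct drm_device *dev)
{
        int ret;

        ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
        if (ret)
                return ret;

        /* dev->vblank is drmm-allocated; nothing to free on unload */
        return 0;
}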


@ -595,8 +595,8 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
vma->vm_ops = &drm_vm_ops;
break;
}
fallthrough; /* to _DRM_FRAME_BUFFER... */
#endif
/* fall through - to _DRM_FRAME_BUFFER... */
case _DRM_FRAME_BUFFER:
case _DRM_REGISTERS:
offset = drm_core_get_reg_ofs(dev);
@ -621,7 +621,7 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
vma->vm_end - vma->vm_start, vma->vm_page_prot))
return -EAGAIN;
vma->vm_page_prot = drm_dma_prot(map->type, vma);
/* fall through - to _DRM_SHM */
fallthrough; /* to _DRM_SHM */
case _DRM_SHM:
vma->vm_ops = &drm_vm_shm_ops;
vma->vm_private_data = (void *)map;


@ -1,94 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/module.h>
/**
* DOC: overview
*
* This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM
* buffer object that is backed by video RAM. It can be used for
* framebuffer devices with dedicated memory. The video RAM is managed
* by &struct drm_vram_mm (VRAM MM).
*
* With the GEM interface userspace applications create, manage and destroy
* graphics buffers, such as an on-screen framebuffer. GEM does not provide
* an implementation of these interfaces. It's up to the DRM driver to
* provide an implementation that suits the hardware. If the hardware device
* contains dedicated video memory, the DRM driver can use the VRAM helper
* library. Each active buffer object is stored in video RAM. Active
* buffer are used for drawing the current frame, typically something like
* the frame's scanout buffer or the cursor image. If there's no more space
* left in VRAM, inactive GEM objects can be moved to system memory.
*
* The easiest way to use the VRAM helper library is to call
* drm_vram_helper_alloc_mm(). The function allocates and initializes an
* instance of &struct drm_vram_mm in &struct drm_device.vram_mm . Use
* &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and
* &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations;
* as illustrated below.
*
* .. code-block:: c
*
* struct file_operations fops ={
* .owner = THIS_MODULE,
* DRM_VRAM_MM_FILE_OPERATION
* };
* struct drm_driver drv = {
* .driver_feature = DRM_ ... ,
* .fops = &fops,
* DRM_GEM_VRAM_DRIVER
* };
*
* int init_drm_driver()
* {
* struct drm_device *dev;
* uint64_t vram_base;
* unsigned long vram_size;
* int ret;
*
* // setup device, vram base and size
* // ...
*
* ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
* if (ret)
* return ret;
* return 0;
* }
*
* This creates an instance of &struct drm_vram_mm, exports DRM userspace
* interfaces for GEM buffer management and initializes file operations to
* allow for accessing created GEM buffers. With this setup, the DRM driver
* manages an area of video RAM with VRAM MM and provides GEM VRAM objects
* to userspace.
*
* To clean up the VRAM memory management, call drm_vram_helper_release_mm()
* in the driver's clean-up code.
*
* .. code-block:: c
*
* void fini_drm_driver()
* {
* struct drm_device *dev = ...;
*
* drm_vram_helper_release_mm(dev);
* }
*
* For drawing or scanout operations, buffer object have to be pinned in video
* RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
* &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system
* memory. Call drm_gem_vram_unpin() to release the pinned object afterwards.
*
* A buffer object that is pinned in video RAM has a fixed address within that
* memory region. Call drm_gem_vram_offset() to retrieve this value. Typically
* it's used to program the hardware's scanout engine for framebuffers, set
* the cursor overlay's image for a mouse cursor, or use it as input to the
* hardware's draing engine.
*
* To access a buffer object's memory from the DRM driver, call
* drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address
* space and returns the memory address. Use drm_gem_vram_kunmap() to
* release the mapping.
*/
MODULE_DESCRIPTION("DRM VRAM memory-management helpers");
MODULE_LICENSE("GPL");


@ -231,21 +231,11 @@ static struct drm_info_list etnaviv_debugfs_list[] = {
{"ring", show_each_gpu, 0, etnaviv_ring_show},
};
static int etnaviv_debugfs_init(struct drm_minor *minor)
static void etnaviv_debugfs_init(struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
int ret;
ret = drm_debugfs_create_files(etnaviv_debugfs_list,
ARRAY_SIZE(etnaviv_debugfs_list),
minor->debugfs_root, minor);
if (ret) {
dev_err(dev->dev, "could not install etnaviv_debugfs_list\n");
return ret;
}
return ret;
drm_debugfs_create_files(etnaviv_debugfs_list,
ARRAY_SIZE(etnaviv_debugfs_list),
minor->debugfs_root, minor);
}
#endif


@ -25,6 +25,7 @@
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include <drm/exynos_drm.h>
#include "exynos_drm_crtc.h"
@ -135,10 +136,6 @@ static const struct drm_encoder_helper_funcs exynos_dp_encoder_helper_funcs = {
.disable = exynos_dp_nop,
};
static const struct drm_encoder_funcs exynos_dp_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static int exynos_dp_dt_parse_panel(struct exynos_dp_device *dp)
{
int ret;
@ -167,8 +164,7 @@ static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
return ret;
}
drm_encoder_init(drm_dev, encoder, &exynos_dp_encoder_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &exynos_dp_encoder_helper_funcs);
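The same conversion repeats in the drivers below: a drm_encoder_funcs instance whose only callback was .destroy = drm_encoder_cleanup is dropped, and drm_simple_encoder_init() provides that default instead. Roughly, for a hypothetical encoder:

        /* before */
        drm_encoder_init(drm_dev, encoder, &foo_encoder_funcs,
                         DRM_MODE_ENCODER_TMDS, NULL);

        /* after: foo_encoder_funcs is no longer needed at all */
        drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS);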


@ -14,6 +14,7 @@
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include <video/of_videomode.h>
#include <video/videomode.h>
@ -149,10 +150,6 @@ static const struct drm_encoder_helper_funcs exynos_dpi_encoder_helper_funcs = {
.disable = exynos_dpi_disable,
};
static const struct drm_encoder_funcs exynos_dpi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
enum {
FIMD_PORT_IN0,
FIMD_PORT_IN1,
@ -201,8 +198,7 @@ int exynos_dpi_bind(struct drm_device *dev, struct drm_encoder *encoder)
{
int ret;
drm_encoder_init(dev, encoder, &exynos_dpi_encoder_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &exynos_dpi_encoder_helper_funcs);


@ -30,6 +30,7 @@
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "exynos_drm_crtc.h"
#include "exynos_drm_drv.h"
@ -1523,10 +1524,6 @@ static const struct drm_encoder_helper_funcs exynos_dsi_encoder_helper_funcs = {
.disable = exynos_dsi_disable,
};
static const struct drm_encoder_funcs exynos_dsi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
MODULE_DEVICE_TABLE(of, exynos_dsi_of_match);
static int exynos_dsi_host_attach(struct mipi_dsi_host *host,
@ -1704,8 +1701,7 @@ static int exynos_dsi_bind(struct device *dev, struct device *master,
struct drm_bridge *in_bridge;
int ret;
drm_encoder_init(drm_dev, encoder, &exynos_dsi_encoder_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &exynos_dsi_encoder_helper_funcs);


@ -14,6 +14,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_vblank.h>
#include <drm/exynos_drm.h>
@ -369,10 +370,6 @@ static const struct drm_encoder_helper_funcs exynos_vidi_encoder_helper_funcs =
.disable = exynos_vidi_disable,
};
static const struct drm_encoder_funcs exynos_vidi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static int vidi_bind(struct device *dev, struct device *master, void *data)
{
struct vidi_context *ctx = dev_get_drvdata(dev);
@ -406,8 +403,7 @@ static int vidi_bind(struct device *dev, struct device *master, void *data)
return PTR_ERR(ctx->crtc);
}
drm_encoder_init(drm_dev, encoder, &exynos_vidi_encoder_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &exynos_vidi_encoder_helper_funcs);


@ -38,6 +38,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "exynos_drm_crtc.h"
#include "regs-hdmi.h"
@ -1559,10 +1560,6 @@ static const struct drm_encoder_helper_funcs exynos_hdmi_encoder_helper_funcs =
.disable = hdmi_disable,
};
static const struct drm_encoder_funcs exynos_hdmi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static void hdmi_audio_shutdown(struct device *dev, void *data)
{
struct hdmi_context *hdata = dev_get_drvdata(dev);
@ -1843,8 +1840,7 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data)
hdata->phy_clk.enable = hdmiphy_clk_enable;
drm_encoder_init(drm_dev, encoder, &exynos_hdmi_encoder_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &exynos_hdmi_encoder_helper_funcs);


@ -13,19 +13,11 @@
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "fsl_dcu_drm_drv.h"
#include "fsl_tcon.h"
static void fsl_dcu_drm_encoder_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_funcs encoder_funcs = {
.destroy = fsl_dcu_drm_encoder_destroy,
};
int fsl_dcu_drm_encoder_create(struct fsl_dcu_drm_device *fsl_dev,
struct drm_crtc *crtc)
{
@ -38,8 +30,8 @@ int fsl_dcu_drm_encoder_create(struct fsl_dcu_drm_device *fsl_dev,
if (fsl_dev->tcon)
fsl_tcon_bypass_enable(fsl_dev->tcon);
ret = drm_encoder_init(fsl_dev->drm, encoder, &encoder_funcs,
DRM_MODE_ENCODER_LVDS, NULL);
ret = drm_simple_encoder_init(fsl_dev->drm, encoder,
DRM_MODE_ENCODER_LVDS);
if (ret < 0)
return ret;


@ -28,6 +28,8 @@
#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <drm/drm_simple_kms_helper.h>
#include "cdv_device.h"
#include "intel_bios.h"
#include "power.h"
@ -237,15 +239,6 @@ static const struct drm_connector_helper_funcs
.best_encoder = gma_best_encoder,
};
static void cdv_intel_crt_enc_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_funcs cdv_intel_crt_enc_funcs = {
.destroy = cdv_intel_crt_enc_destroy,
};
void cdv_intel_crt_init(struct drm_device *dev,
struct psb_intel_mode_device *mode_dev)
{
@ -271,8 +264,7 @@ void cdv_intel_crt_init(struct drm_device *dev,
&cdv_intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
encoder = &gma_encoder->base;
drm_encoder_init(dev, encoder,
&cdv_intel_crt_enc_funcs, DRM_MODE_ENCODER_DAC, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_DAC);
gma_connector_attach_encoder(gma_connector, gma_encoder);


@ -32,6 +32,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include "gma_display.h"
#include "psb_drv.h"
@ -1908,11 +1909,6 @@ cdv_intel_dp_destroy(struct drm_connector *connector)
kfree(connector);
}
static void cdv_intel_dp_encoder_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_helper_funcs cdv_intel_dp_helper_funcs = {
.dpms = cdv_intel_dp_dpms,
.mode_fixup = cdv_intel_dp_mode_fixup,
@ -1935,11 +1931,6 @@ static const struct drm_connector_helper_funcs cdv_intel_dp_connector_helper_fun
.best_encoder = gma_best_encoder,
};
static const struct drm_encoder_funcs cdv_intel_dp_enc_funcs = {
.destroy = cdv_intel_dp_encoder_destroy,
};
static void cdv_intel_dp_add_properties(struct drm_connector *connector)
{
cdv_intel_attach_force_audio_property(connector);
@ -2016,8 +2007,7 @@ cdv_intel_dp_init(struct drm_device *dev, struct psb_intel_mode_device *mode_dev
encoder = &gma_encoder->base;
drm_connector_init(dev, connector, &cdv_intel_dp_connector_funcs, type);
drm_encoder_init(dev, encoder, &cdv_intel_dp_enc_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_TMDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);
@ -2120,7 +2110,7 @@ cdv_intel_dp_init(struct drm_device *dev, struct psb_intel_mode_device *mode_dev
if (ret == 0) {
/* if this fails, presume the device is a ghost */
DRM_INFO("failed to retrieve link info, disabling eDP\n");
cdv_intel_dp_encoder_destroy(encoder);
drm_encoder_cleanup(encoder);
cdv_intel_dp_destroy(connector);
goto err_priv;
} else {


@ -32,6 +32,7 @@
#include <drm/drm.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_simple_kms_helper.h>
#include "cdv_device.h"
#include "psb_drv.h"
@ -311,8 +312,7 @@ void cdv_hdmi_init(struct drm_device *dev,
&cdv_hdmi_connector_funcs,
DRM_MODE_CONNECTOR_DVID);
drm_encoder_init(dev, encoder, &psb_intel_lvds_enc_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_TMDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);
gma_encoder->type = INTEL_OUTPUT_HDMI;


@ -12,6 +12,8 @@
#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <drm/drm_simple_kms_helper.h>
#include "cdv_device.h"
#include "intel_bios.h"
#include "power.h"
@ -499,16 +501,6 @@ static const struct drm_connector_funcs cdv_intel_lvds_connector_funcs = {
.destroy = cdv_intel_lvds_destroy,
};
static void cdv_intel_lvds_enc_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_funcs cdv_intel_lvds_enc_funcs = {
.destroy = cdv_intel_lvds_enc_destroy,
};
/*
* Enumerate the child dev array parsed from VBT to check whether
* the LVDS is present.
@ -616,10 +608,7 @@ void cdv_intel_lvds_init(struct drm_device *dev,
&cdv_intel_lvds_connector_funcs,
DRM_MODE_CONNECTOR_LVDS);
drm_encoder_init(dev, encoder,
&cdv_intel_lvds_enc_funcs,
DRM_MODE_ENCODER_LVDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_LVDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);
gma_encoder->type = INTEL_OUTPUT_LVDS;


@ -577,31 +577,31 @@ static void psb_setup_outputs(struct drm_device *dev)
break;
case INTEL_OUTPUT_SDVO:
crtc_mask = dev_priv->ops->sdvo_mask;
clone_mask = (1 << INTEL_OUTPUT_SDVO);
clone_mask = 0;
break;
case INTEL_OUTPUT_LVDS:
crtc_mask = dev_priv->ops->lvds_mask;
clone_mask = (1 << INTEL_OUTPUT_LVDS);
crtc_mask = dev_priv->ops->lvds_mask;
clone_mask = 0;
break;
case INTEL_OUTPUT_MIPI:
crtc_mask = (1 << 0);
clone_mask = (1 << INTEL_OUTPUT_MIPI);
clone_mask = 0;
break;
case INTEL_OUTPUT_MIPI2:
crtc_mask = (1 << 2);
clone_mask = (1 << INTEL_OUTPUT_MIPI2);
clone_mask = 0;
break;
case INTEL_OUTPUT_HDMI:
crtc_mask = dev_priv->ops->hdmi_mask;
crtc_mask = dev_priv->ops->hdmi_mask;
clone_mask = (1 << INTEL_OUTPUT_HDMI);
break;
case INTEL_OUTPUT_DISPLAYPORT:
crtc_mask = (1 << 0) | (1 << 1);
clone_mask = (1 << INTEL_OUTPUT_DISPLAYPORT);
clone_mask = 0;
break;
case INTEL_OUTPUT_EDP:
crtc_mask = (1 << 1);
clone_mask = (1 << INTEL_OUTPUT_EDP);
clone_mask = 0;
}
encoder->possible_crtcs = crtc_mask;
encoder->possible_clones =


@ -27,6 +27,8 @@
#include <linux/delay.h>
#include <drm/drm_simple_kms_helper.h>
#include "mdfld_dsi_dpi.h"
#include "mdfld_dsi_pkg_sender.h"
#include "mdfld_output.h"
@ -993,10 +995,7 @@ struct mdfld_dsi_encoder *mdfld_dsi_dpi_init(struct drm_device *dev,
/*create drm encoder object*/
connector = &dsi_connector->base.base;
encoder = &dpi_output->base.base.base;
drm_encoder_init(dev,
encoder,
p_funcs->encoder_funcs,
DRM_MODE_ENCODER_LVDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_LVDS);
drm_encoder_helper_add(encoder,
p_funcs->encoder_helper_funcs);
@ -1006,10 +1005,10 @@ struct mdfld_dsi_encoder *mdfld_dsi_dpi_init(struct drm_device *dev,
/*set possible crtcs and clones*/
if (dsi_connector->pipe) {
encoder->possible_crtcs = (1 << 2);
encoder->possible_clones = (1 << 1);
encoder->possible_clones = 0;
} else {
encoder->possible_crtcs = (1 << 0);
encoder->possible_clones = (1 << 0);
encoder->possible_clones = 0;
}
dsi_connector->base.encoder = &dpi_output->base.base;


@ -51,7 +51,6 @@ struct panel_info {
};
struct panel_funcs {
const struct drm_encoder_funcs *encoder_funcs;
const struct drm_encoder_helper_funcs *encoder_helper_funcs;
struct drm_display_mode * (*get_config_mode)(struct drm_device *);
int (*get_panel_info)(struct drm_device *, int, struct panel_info *);


@ -188,13 +188,7 @@ static const struct drm_encoder_helper_funcs
.commit = mdfld_dsi_dpi_commit,
};
/*TPO DPI encoder funcs*/
static const struct drm_encoder_funcs mdfld_tpo_dpi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
const struct panel_funcs mdfld_tmd_vid_funcs = {
.encoder_funcs = &mdfld_tpo_dpi_encoder_funcs,
.encoder_helper_funcs = &mdfld_tpo_dpi_encoder_helper_funcs,
.get_config_mode = &tmd_vid_get_config_mode,
.get_panel_info = tmd_vid_get_panel_info,


@ -76,13 +76,7 @@ static const struct drm_encoder_helper_funcs
.commit = mdfld_dsi_dpi_commit,
};
/*TPO DPI encoder funcs*/
static const struct drm_encoder_funcs mdfld_tpo_dpi_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
const struct panel_funcs mdfld_tpo_vid_funcs = {
.encoder_funcs = &mdfld_tpo_dpi_encoder_funcs,
.encoder_helper_funcs = &mdfld_tpo_dpi_encoder_helper_funcs,
.get_config_mode = &tpo_vid_get_config_mode,
.get_panel_info = tpo_vid_get_panel_info,


@ -27,6 +27,7 @@
#include <linux/delay.h>
#include <drm/drm.h>
#include <drm/drm_simple_kms_helper.h>
#include "psb_drv.h"
#include "psb_intel_drv.h"
@ -620,15 +621,6 @@ static const struct drm_connector_funcs oaktrail_hdmi_connector_funcs = {
.destroy = oaktrail_hdmi_destroy,
};
static void oaktrail_hdmi_enc_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
static const struct drm_encoder_funcs oaktrail_hdmi_enc_funcs = {
.destroy = oaktrail_hdmi_enc_destroy,
};
void oaktrail_hdmi_init(struct drm_device *dev,
struct psb_intel_mode_device *mode_dev)
{
@ -651,9 +643,7 @@ void oaktrail_hdmi_init(struct drm_device *dev,
&oaktrail_hdmi_connector_funcs,
DRM_MODE_CONNECTOR_DVID);
drm_encoder_init(dev, encoder,
&oaktrail_hdmi_enc_funcs,
DRM_MODE_ENCODER_TMDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_TMDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);


@ -13,6 +13,8 @@
#include <asm/intel-mid.h>
#include <drm/drm_simple_kms_helper.h>
#include "intel_bios.h"
#include "power.h"
#include "psb_drv.h"
@ -311,8 +313,7 @@ void oaktrail_lvds_init(struct drm_device *dev,
&psb_intel_lvds_connector_funcs,
DRM_MODE_CONNECTOR_LVDS);
drm_encoder_init(dev, encoder, &psb_intel_lvds_enc_funcs,
DRM_MODE_ENCODER_LVDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_LVDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);
gma_encoder->type = INTEL_OUTPUT_LVDS;


@ -252,7 +252,6 @@ extern int psb_intel_lvds_set_property(struct drm_connector *connector,
struct drm_property *property,
uint64_t value);
extern void psb_intel_lvds_destroy(struct drm_connector *connector);
extern const struct drm_encoder_funcs psb_intel_lvds_enc_funcs;
/* intel_gmbus.c */
extern void gma_intel_i2c_reset(struct drm_device *dev);


@ -11,6 +11,8 @@
#include <linux/i2c.h>
#include <linux/pm_runtime.h>
#include <drm/drm_simple_kms_helper.h>
#include "intel_bios.h"
#include "power.h"
#include "psb_drv.h"
@ -621,18 +623,6 @@ const struct drm_connector_funcs psb_intel_lvds_connector_funcs = {
.destroy = psb_intel_lvds_destroy,
};
static void psb_intel_lvds_enc_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
}
const struct drm_encoder_funcs psb_intel_lvds_enc_funcs = {
.destroy = psb_intel_lvds_enc_destroy,
};
/**
* psb_intel_lvds_init - setup LVDS connectors on this device
* @dev: drm device
@ -683,9 +673,7 @@ void psb_intel_lvds_init(struct drm_device *dev,
&psb_intel_lvds_connector_funcs,
DRM_MODE_CONNECTOR_LVDS);
drm_encoder_init(dev, encoder,
&psb_intel_lvds_enc_funcs,
DRM_MODE_ENCODER_LVDS, NULL);
drm_simple_encoder_init(dev, encoder, DRM_MODE_ENCODER_LVDS);
gma_connector_attach_encoder(gma_connector, gma_encoder);
gma_encoder->type = INTEL_OUTPUT_LVDS;


@ -747,11 +747,11 @@ static int cmi_lcd_hack_create_device(void)
return -EINVAL;
}
client = i2c_new_device(adapter, &info);
if (!client) {
pr_err("%s: i2c_new_device() failed\n", __func__);
client = i2c_new_client_device(adapter, &info);
if (IS_ERR(client)) {
pr_err("%s: creating I2C device failed\n", __func__);
i2c_put_adapter(adapter);
return -EINVAL;
return PTR_ERR(client);
}
return 0;
@ -765,12 +765,7 @@ static const struct drm_encoder_helper_funcs tc35876x_encoder_helper_funcs = {
.commit = mdfld_dsi_dpi_commit,
};
static const struct drm_encoder_funcs tc35876x_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
const struct panel_funcs mdfld_tc35876x_funcs = {
.encoder_funcs = &tc35876x_encoder_funcs,
.encoder_helper_funcs = &tc35876x_encoder_helper_funcs,
.get_config_mode = tc35876x_get_config_mode,
.get_panel_info = tc35876x_get_panel_info,


@ -94,6 +94,10 @@ static int hibmc_plane_atomic_check(struct drm_plane *plane,
return -EINVAL;
}
if (state->fb->pitches[0] % 128 != 0) {
DRM_DEBUG_ATOMIC("wrong stride with 128-byte aligned\n");
return -EINVAL;
}
return 0;
}
@ -119,11 +123,8 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS);
reg = state->fb->width * (state->fb->format->cpp[0]);
/* now line_pad is 16 */
reg = PADDING(16, reg);
line_l = state->fb->width * state->fb->format->cpp[0];
line_l = PADDING(16, line_l);
line_l = state->fb->pitches[0];
writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) |
HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l),
priv->mmio + HIBMC_CRT_FB_WIDTH);


@ -94,7 +94,7 @@ static int hibmc_kms_init(struct hibmc_drm_private *priv)
priv->dev->mode_config.max_height = 1200;
priv->dev->mode_config.fb_base = priv->fb_base;
priv->dev->mode_config.preferred_depth = 24;
priv->dev->mode_config.preferred_depth = 32;
priv->dev->mode_config.prefer_shadow = 1;
priv->dev->mode_config.funcs = (void *)&hibmc_mode_funcs;
@ -307,11 +307,7 @@ static int hibmc_load(struct drm_device *dev)
/* reset all the states of crtc/plane/encoder/connector */
drm_mode_config_reset(dev);
ret = drm_fbdev_generic_setup(dev, 16);
if (ret) {
DRM_ERROR("failed to initialize fbdev: %d\n", ret);
goto err;
}
drm_fbdev_generic_setup(dev, dev->mode_config.preferred_depth);
return 0;


@ -50,7 +50,7 @@ void hibmc_mm_fini(struct hibmc_drm_private *hibmc)
int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
return drm_gem_vram_fill_create_dumb(file, dev, 0, 16, args);
return drm_gem_vram_fill_create_dumb(file, dev, 0, 128, args);
}
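The 128 passed as the pitch alignment means every scanline of a dumb buffer is rounded up to a multiple of 128 bytes, matching the stride check added to the plane's atomic_check above; e.g. a 1366-pixel-wide XRGB8888 buffer needs ALIGN(1366 * 4, 128) = 5504 bytes per line rather than 5464. The equivalent computation, sketched:

        u32 cpp = DIV_ROUND_UP(args->bpp, 8);          /* bytes per pixel */
        u32 pitch = ALIGN(args->width * cpp, 128);      /* 1366 * 4 -> 5504 */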
const struct drm_mode_config_funcs hibmc_mode_funcs = {
