Merge tag 'drm-misc-next-2024-10-09' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.13:

UAPI Changes:
- Add drm fdinfo support to panthor, and a sysfs knob to toggle it.

Cross-subsystem Changes:
- Convert fbdev drivers to use backlight power constants.
- Some small dma-fence fixes.
- Some kernel-doc fixes.

Core Changes:
- Small drm client fixes.
- Document the requirement to file a bug before marking a test as flaky.
- Remove swapped and pinned BOs from the TTM LRU list.

Driver Changes:
- Assorted small fixes to panel/elida-kd35t133, nouveau, vc4, imx.
- Fix some bridges to drop cached EDIDs on power off.
- Add Jenson BL-JT60050-01A, Samsung s6e3ha8 & AMS639RQ08 panels.
- Make 180° rotation work on ilitek-ili9881c, even for already-rotated
  panels.

Signed-off-by: Dave Airlie <airlied@redhat.com>

# Conflicts:
#	drivers/gpu/drm/panthor/panthor_drv.c
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/8dc111ca-d20c-4e0d-856e-c12d208cbf2a@linux.intel.com
Commit aa628ebb06 by Dave Airlie, 2024-10-11 05:39:09 +10:00
76 changed files with 1771 additions and 228 deletions

View File

@ -0,0 +1,10 @@
What: /sys/bus/platform/drivers/panthor/.../profiling
Date: September 2024
KernelVersion: 6.11.0
Contact: Adrian Larumbe <adrian.larumbe@collabora.com>
Description:
Bitmask to enable drm fdinfo's job profiling measurements.
Valid values are:
0: Don't enable fdinfo job profiling sources.
1: Enable GPU cycle measurements for running jobs.
2: Enable GPU timestamp sampling for running jobs.

View File

@ -50,6 +50,8 @@ properties:
- hannstar,hsd101pww2
# Hydis Technologies 7" WXGA (800x1280) TFT LCD LVDS panel
- hydis,hv070wx2-1e0
# Jenson Display BL-JT60050-01A 7" WSVGA (1024x600) color TFT LCD LVDS panel
- jenson,bl-jt60050-01a
- tbs,a711-panel
- const: panel-lvds

View File

@ -0,0 +1,80 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/samsung,ams639rq08.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Samsung AMS639RQ08 EA8076-based 6.39" 1080x2340 MIPI-DSI Panel
maintainers:
- Danila Tikhonov <danila@jiaxyga.com>
- Jens Reidel <adrian@travitia.xyz>
description:
The Samsung AMS639RQ08 is a 6.39 inch 1080x2340 MIPI-DSI CMD mode AMOLED panel.
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: samsung,ams639rq08
reg:
maxItems: 1
vdd3p3-supply:
description: 3.3V source voltage rail
vddio-supply:
description: I/O source voltage rail
vsn-supply:
description: Negative source voltage rail
vsp-supply:
description: Positive source voltage rail
reset-gpios: true
port: true
required:
- compatible
- reg
- vdd3p3-supply
- vddio-supply
- vsn-supply
- vsp-supply
- reset-gpios
- port
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "samsung,ams639rq08";
reg = <0>;
vdd3p3-supply = <&vreg_l18a_2p8>;
vddio-supply = <&vreg_l13a_1p8>;
vsn-supply = <&vreg_ibb>;
vsp-supply = <&vreg_lab>;
reset-gpios = <&pm6150l_gpios 9 GPIO_ACTIVE_LOW>;
port {
panel_in: endpoint {
remote-endpoint = <&mdss_dsi0_out>;
};
};
};
};
...

View File

@ -0,0 +1,75 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/samsung,s6e3ha8.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Samsung s6e3ha8 AMOLED DSI panel
description: The s6e3ha8 is a 1440x2960 DSI display panel from Samsung Mobile
Displays (SMD).
maintainers:
- Dzmitry Sankouski <dsankouski@gmail.com>
allOf:
- $ref: panel-common.yaml#
properties:
compatible:
const: samsung,s6e3ha8
reg:
maxItems: 1
reset-gpios: true
port: true
vdd3-supply:
description: VDD regulator
vci-supply:
description: VCI regulator
vddr-supply:
description: VDDR regulator
required:
- compatible
- reset-gpios
- vdd3-supply
- vci-supply
- vddr-supply
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
dsi {
#address-cells = <1>;
#size-cells = <0>;
panel@0 {
compatible = "samsung,s6e3ha8";
reg = <0>;
vci-supply = <&s2dos05_ldo4>;
vddr-supply = <&s2dos05_buck1>;
vdd3-supply = <&s2dos05_ldo1>;
te-gpios = <&tlmm 10 GPIO_ACTIVE_HIGH>;
reset-gpios = <&tlmm 6 GPIO_ACTIVE_HIGH>;
pinctrl-0 = <&sde_dsi_active &sde_te_active_sleep>;
pinctrl-1 = <&sde_dsi_suspend &sde_te_active_sleep>;
pinctrl-names = "default", "sleep";
port {
panel_in: endpoint {
remote-endpoint = <&mdss_dsi0_out>;
};
};
};
};
...

View File

@ -752,6 +752,8 @@ patternProperties:
description: Japan Display Inc.
"^jedec,.*":
description: JEDEC Solid State Technology Association
"^jenson,.*":
description: Jenson Display Co. Ltd.
"^jesurun,.*":
description: Shenzhen Jesurun Electronics Business Dept.
"^jethome,.*":

View File

@ -68,19 +68,25 @@ known to behave unreliably. These tests won't cause a job to fail regardless of
the result. They will still be run.
Each new flake entry must be associated with a link to the email reporting the
bug to the author of the affected driver, the board name or Device Tree name of
the board, the first kernel version affected, the IGT version used for tests,
and an approximation of the failure rate.
bug to the author of the affected driver or the relevant GitLab issue. The entry
must also include the board name or Device Tree name, the first kernel version
affected, the IGT version used for tests, and an approximation of the failure rate.
They should be provided under the following format::
# Bug Report: $LORE_OR_PATCHWORK_URL
# Bug Report: $LORE_URL_OR_GITLAB_ISSUE
# Board Name: broken-board.dtb
# Linux Version: 6.6-rc1
# IGT Version: 1.28-gd2af13d9f
# Failure Rate: 100
flaky-test
Use the appropriate link below to create a GitLab issue:
amdgpu driver: https://gitlab.freedesktop.org/drm/amd/-/issues
i915 driver: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues
msm driver: https://gitlab.freedesktop.org/drm/msm/-/issues
xe driver: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues
drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-skips.txt
-----------------------------------------------------------

View File

@ -73,6 +73,11 @@ scope of each device, in which case `drm-pdev` shall be present as well.
Userspace should make sure to not double account any usage statistics by using
the above described criteria in order to associate data to individual clients.
- drm-client-name: <valstr>
String optionally set by userspace using DRM_IOCTL_SET_CLIENT_NAME.
Utilization
^^^^^^^^^^^

View File

@ -0,0 +1,46 @@
.. SPDX-License-Identifier: GPL-2.0+
=========================
drm/Panthor CSF driver
=========================
.. _panthor-usage-stats:
Panthor DRM client usage stats implementation
==============================================
The drm/Panthor driver implements the DRM client usage stats specification as
documented in :ref:`drm-client-usage-stats`.
Example output showing the implemented key-value pairs and the entirety of
the currently possible format options:
::
pos: 0
flags: 02400002
mnt_id: 29
ino: 491
drm-driver: panthor
drm-client-id: 10
drm-engine-panthor: 111110952750 ns
drm-cycles-panthor: 94439687187
drm-maxfreq-panthor: 1000000000 Hz
drm-curfreq-panthor: 1000000000 Hz
drm-total-memory: 16480 KiB
drm-shared-memory: 0
drm-active-memory: 16200 KiB
drm-resident-memory: 16480 KiB
drm-purgeable-memory: 0
Possible `drm-engine-` key names are: `panthor`.
`drm-curfreq-` values convey the current operating frequency for that engine.
Users must bear in mind that engine and cycle sampling are disabled by default,
because of power saving concerns. `fdinfo` users and benchmark applications which
query the fdinfo file must make sure to toggle the job profiling status of the
driver by writing into the appropriate sysfs node::
echo <N> > /sys/bus/platform/drivers/panthor/[a-f0-9]*.gpu/profiling
Where `N` is a bitmask: bit 0 enables cycle sampling and bit 1 enables
timestamp sampling.

View File

@ -7383,6 +7383,12 @@ S: Maintained
F: Documentation/devicetree/bindings/display/panel/samsung,s6d7aa0.yaml
F: drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c
DRM DRIVER FOR SAMSUNG S6E3HA8 PANELS
M: Dzmitry Sankouski <dsankouski@gmail.com>
S: Maintained
F: Documentation/devicetree/bindings/display/panel/samsung,s6e3ha8.yaml
F: drivers/gpu/drm/panel/panel-samsung-s6e3ha8.c
DRM DRIVER FOR SITRONIX ST7586 PANELS
M: David Lechner <david@lechnology.com>
S: Maintained

View File

@ -412,7 +412,7 @@ int dma_fence_signal_timestamp(struct dma_fence *fence, ktime_t timestamp)
unsigned long flags;
int ret;
if (!fence)
if (WARN_ON(!fence))
return -EINVAL;
spin_lock_irqsave(fence->lock, flags);
@ -464,7 +464,7 @@ int dma_fence_signal(struct dma_fence *fence)
int ret;
bool tmp;
if (!fence)
if (WARN_ON(!fence))
return -EINVAL;
tmp = dma_fence_begin_signalling();

View File

@ -173,11 +173,6 @@ static bool timeline_fence_signaled(struct dma_fence *fence)
return !__dma_fence_is_later(fence->seqno, parent->value, fence->ops);
}
static bool timeline_fence_enable_signaling(struct dma_fence *fence)
{
return true;
}
static void timeline_fence_value_str(struct dma_fence *fence,
char *str, int size)
{
@ -211,7 +206,6 @@ static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadlin
static const struct dma_fence_ops timeline_fence_ops = {
.get_driver_name = timeline_fence_get_driver_name,
.get_timeline_name = timeline_fence_get_timeline_name,
.enable_signaling = timeline_fence_enable_signaling,
.signaled = timeline_fence_signaled,
.release = timeline_fence_release,
.fence_value_str = timeline_fence_value_str,

View File

@ -2551,6 +2551,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev)
mutex_lock(&ctx->lock);
anx7625_stop_dp_work(ctx);
if (!ctx->pdata.panel_bridge)
anx7625_remove_edid(ctx);
anx7625_power_standby(ctx);
mutex_unlock(&ctx->lock);

View File

@ -3107,6 +3107,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev)
{
struct it6505 *it6505 = dev_get_drvdata(dev);
it6505_remove_edid(it6505);
return it6505_poweroff(it6505);
}

View File

@ -145,7 +145,7 @@ drm_connector_fallback_non_tiled_mode(struct drm_connector *connector)
}
static struct drm_display_mode *
drm_connector_has_preferred_mode(struct drm_connector *connector, int width, int height)
drm_connector_preferred_mode(struct drm_connector *connector, int width, int height)
{
struct drm_display_mode *mode;
@ -159,6 +159,12 @@ drm_connector_has_preferred_mode(struct drm_connector *connector, int width, int
return NULL;
}
static struct drm_display_mode *drm_connector_first_mode(struct drm_connector *connector)
{
return list_first_entry_or_null(&connector->modes,
struct drm_display_mode, head);
}
static struct drm_display_mode *drm_connector_pick_cmdline_mode(struct drm_connector *connector)
{
struct drm_cmdline_mode *cmdline_mode;
@ -331,7 +337,7 @@ static bool drm_client_target_cloned(struct drm_device *dev,
if (!modes[i])
can_clone = false;
}
kfree(dmt_mode);
drm_mode_destroy(dev, dmt_mode);
if (can_clone) {
drm_dbg_kms(dev, "can clone using 1024x768\n");
@ -441,13 +447,11 @@ retry:
drm_dbg_kms(dev, "[CONNECTOR:%d:%s] looking for preferred mode, tile %d\n",
connector->base.id, connector->name,
connector->tile_group ? connector->tile_group->id : 0);
modes[i] = drm_connector_has_preferred_mode(connector, width, height);
modes[i] = drm_connector_preferred_mode(connector, width, height);
}
/* No preferred modes, pick one off the list */
if (!modes[i] && !list_empty(&connector->modes)) {
list_for_each_entry(modes[i], &connector->modes, head)
break;
}
if (!modes[i])
modes[i] = drm_connector_first_mode(connector);
/*
* In case of tiled mode if all tiles not present fallback to
* first available non tiled mode.
@ -531,7 +535,7 @@ static int drm_client_pick_crtcs(struct drm_client_dev *client,
my_score++;
if (connector->cmdline_mode.specified)
my_score++;
if (drm_connector_has_preferred_mode(connector, width, height))
if (drm_connector_preferred_mode(connector, width, height))
my_score++;
/*
@ -686,16 +690,14 @@ retry:
"[CONNECTOR:%d:%s] looking for preferred mode, has tile: %s\n",
connector->base.id, connector->name,
str_yes_no(connector->has_tile));
modes[i] = drm_connector_has_preferred_mode(connector, width, height);
modes[i] = drm_connector_preferred_mode(connector, width, height);
}
/* No preferred mode marked by the EDID? Are there any modes? */
if (!modes[i] && !list_empty(&connector->modes)) {
drm_dbg_kms(dev, "[CONNECTOR:%d:%s] using first listed mode\n",
connector->base.id, connector->name);
modes[i] = list_first_entry(&connector->modes,
struct drm_display_mode,
head);
modes[i] = drm_connector_first_mode(connector);
}
/* last resort: use current mode */
@ -878,7 +880,7 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width,
break;
}
kfree(modeset->mode);
drm_mode_destroy(dev, modeset->mode);
modeset->mode = drm_mode_duplicate(dev, mode);
if (!modeset->mode) {
ret = -ENOMEM;

View File

@ -78,12 +78,14 @@ static int drm_clients_info(struct seq_file *m, void *data)
kuid_t uid;
seq_printf(m,
"%20s %5s %3s master a %5s %10s\n",
"%20s %5s %3s master a %5s %10s %*s\n",
"command",
"tgid",
"dev",
"uid",
"magic");
"magic",
DRM_CLIENT_NAME_MAX_LEN,
"name");
/* dev->filelist is sorted youngest first, but we want to present
* oldest first (i.e. kernel, servers, clients), so walk backwards.
@ -94,19 +96,23 @@ static int drm_clients_info(struct seq_file *m, void *data)
struct task_struct *task;
struct pid *pid;
mutex_lock(&priv->client_name_lock);
rcu_read_lock(); /* Locks priv->pid and pid_task()->comm! */
pid = rcu_dereference(priv->pid);
task = pid_task(pid, PIDTYPE_TGID);
uid = task ? __task_cred(task)->euid : GLOBAL_ROOT_UID;
seq_printf(m, "%20s %5d %3d %c %c %5d %10u\n",
seq_printf(m, "%20s %5d %3d %c %c %5d %10u %*s\n",
task ? task->comm : "<unknown>",
pid_vnr(pid),
priv->minor->index,
is_current_master ? 'y' : 'n',
priv->authenticated ? 'y' : 'n',
from_kuid_munged(seq_user_ns(m), uid),
priv->magic);
priv->magic,
DRM_CLIENT_NAME_MAX_LEN,
priv->client_name ? priv->client_name : "<unset>");
rcu_read_unlock();
mutex_unlock(&priv->client_name_lock);
}
mutex_unlock(&dev->filelist_mutex);
return 0;

View File

@ -157,6 +157,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
spin_lock_init(&file->master_lookup_lock);
mutex_init(&file->event_read_lock);
mutex_init(&file->client_name_lock);
if (drm_core_check_feature(dev, DRIVER_GEM))
drm_gem_open(dev, file);
@ -258,6 +259,10 @@ void drm_file_free(struct drm_file *file)
WARN_ON(!list_empty(&file->event_list));
put_pid(rcu_access_pointer(file->pid));
mutex_destroy(&file->client_name_lock);
kfree(file->client_name);
kfree(file);
}
@ -950,6 +955,11 @@ void drm_show_fdinfo(struct seq_file *m, struct file *f)
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
}
mutex_lock(&file->client_name_lock);
if (file->client_name)
drm_printf(&p, "drm-client-name:\t%s\n", file->client_name);
mutex_unlock(&file->client_name_lock);
if (dev->driver->show_fdinfo)
dev->driver->show_fdinfo(&p, file);
}

View File

@ -540,6 +540,55 @@ int drm_version(struct drm_device *dev, void *data,
return err;
}
/*
* Check if the passed string contains control characters, spaces or
* anything else that would mess up formatted output.
*/
static int drm_validate_value_string(const char *value, size_t len)
{
int i;
for (i = 0; i < len; i++) {
if (!isascii(value[i]) || !isgraph(value[i]))
return -EINVAL;
}
return 0;
}
static int drm_set_client_name(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_set_client_name *name = data;
size_t len = name->name_len;
void __user *user_ptr;
char *new_name;
if (len > DRM_CLIENT_NAME_MAX_LEN) {
return -EINVAL;
} else if (len) {
user_ptr = u64_to_user_ptr(name->name);
new_name = memdup_user_nul(user_ptr, len);
if (IS_ERR(new_name))
return PTR_ERR(new_name);
if (strlen(new_name) != len ||
drm_validate_value_string(new_name, len) < 0) {
kfree(new_name);
return -EINVAL;
}
} else {
new_name = NULL;
}
mutex_lock(&file_priv->client_name_lock);
kfree(file_priv->client_name);
file_priv->client_name = new_name;
mutex_unlock(&file_priv->client_name_lock);
return 0;
}
static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
/* ROOT_ONLY is only for CAP_SYS_ADMIN */
@ -610,6 +659,8 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_NAME, drm_set_client_name, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, 0),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, 0),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER),

View File

@ -1520,6 +1520,22 @@ void mipi_dsi_compression_mode_ext_multi(struct mipi_dsi_multi_context *ctx,
}
EXPORT_SYMBOL(mipi_dsi_compression_mode_ext_multi);
/**
* mipi_dsi_compression_mode_multi() - enable/disable DSC on the peripheral
* @dsi: DSI peripheral device
* @enable: Whether to enable or disable the DSC
*
* Enable or disable Display Stream Compression on the peripheral using the
* default Picture Parameter Set and VESA DSC 1.1 algorithm.
*/
void mipi_dsi_compression_mode_multi(struct mipi_dsi_multi_context *ctx,
bool enable)
{
return mipi_dsi_compression_mode_ext_multi(ctx, enable,
MIPI_DSI_COMPRESSION_DSC, 0);
}
EXPORT_SYMBOL(mipi_dsi_compression_mode_multi);
/**
* mipi_dsi_dcs_nop_multi() - send DCS NOP packet
* @ctx: Context for multiple DSI transactions

View File

@ -100,15 +100,9 @@ drm_writeback_fence_get_timeline_name(struct dma_fence *fence)
return wb_connector->timeline_name;
}
static bool drm_writeback_fence_enable_signaling(struct dma_fence *fence)
{
return true;
}
static const struct dma_fence_ops drm_writeback_fence_ops = {
.get_driver_name = drm_writeback_fence_get_driver_name,
.get_timeline_name = drm_writeback_fence_get_timeline_name,
.enable_signaling = drm_writeback_fence_enable_signaling,
};
static int create_writeback_properties(struct drm_device *dev)

View File

@ -808,7 +808,7 @@ static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
}
if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
ret = ttm_bo_populate(bo, &ctx);
if (ret)
return ret;

View File

@ -624,7 +624,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
/* Populate ttm with pages if needed. Typically system memory. */
if (ttm && (dst_man->use_tt || (ttm->page_flags & TTM_TT_FLAG_SWAPPED))) {
ret = ttm_tt_populate(bo->bdev, ttm, ctx);
ret = ttm_bo_populate(bo, ctx);
if (ret)
return ret;
}

View File

@ -90,7 +90,7 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply,
goto out_no_lock;
backup_bo = i915_gem_to_ttm(backup);
err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx);
err = ttm_bo_populate(backup_bo, &ctx);
if (err)
goto out_no_populate;
@ -189,7 +189,7 @@ static int i915_ttm_restore(struct i915_gem_apply_to_region *apply,
if (!backup_bo->resource)
err = ttm_bo_validate(backup_bo, i915_ttm_sys_placement(), &ctx);
if (!err)
err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx);
err = ttm_bo_populate(backup_bo, &ctx);
if (!err) {
err = i915_gem_obj_copy_ttm(obj, backup, pm_apply->allow_gpu,
false);

View File

@ -25,7 +25,7 @@ nvkm-y += nvkm/subdev/i2c/busnv50.o
nvkm-y += nvkm/subdev/i2c/busgf119.o
nvkm-y += nvkm/subdev/i2c/bit.o
nvkm-y += nvkm/subdev/i2c/aux.o
nvkm-y += nvkm/subdev/i2c/auxch.o
nvkm-y += nvkm/subdev/i2c/auxg94.o
nvkm-y += nvkm/subdev/i2c/auxgf119.o
nvkm-y += nvkm/subdev/i2c/auxgm200.o

View File

@ -24,7 +24,7 @@
#define anx9805_pad(p) container_of((p), struct anx9805_pad, base)
#define anx9805_bus(p) container_of((p), struct anx9805_bus, base)
#define anx9805_aux(p) container_of((p), struct anx9805_aux, base)
#include "aux.h"
#include "auxch.h"
#include "bus.h"
struct anx9805_pad {

View File

@ -24,7 +24,7 @@
#include <linux/string_helpers.h>
#include "aux.h"
#include "auxch.h"
#include "pad.h"
static int

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs <bskeggs@redhat.com>
*/
#define g94_i2c_aux(p) container_of((p), struct g94_i2c_aux, base)
#include "aux.h"
#include "auxch.h"
struct g94_i2c_aux {
struct nvkm_i2c_aux base;

View File

@ -19,7 +19,7 @@
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include "aux.h"
#include "auxch.h"
static const struct nvkm_i2c_aux_func
gf119_i2c_aux = {

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs <bskeggs@redhat.com>
*/
#define gm200_i2c_aux(p) container_of((p), struct gm200_i2c_aux, base)
#include "aux.h"
#include "auxch.h"
struct gm200_i2c_aux {
struct nvkm_i2c_aux base;

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs
*/
#include "priv.h"
#include "aux.h"
#include "auxch.h"
#include "bus.h"
#include "pad.h"

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs
*/
#include "pad.h"
#include "aux.h"
#include "auxch.h"
#include "bus.h"
void

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs
*/
#include "pad.h"
#include "aux.h"
#include "auxch.h"
#include "bus.h"
static const struct nvkm_i2c_pad_func

View File

@ -22,7 +22,7 @@
* Authors: Ben Skeggs
*/
#include "pad.h"
#include "aux.h"
#include "auxch.h"
#include "bus.h"
static void

View File

@ -614,6 +614,15 @@ config DRM_PANEL_RONBO_RB070D30
Say Y here if you want to enable support for Ronbo Electronics
RB070D30 1024x600 DSI panel.
config DRM_PANEL_SAMSUNG_AMS639RQ08
tristate "Samsung AMS639RQ08 panel"
depends on OF
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
help
Say Y or M here if you want to enable support for the
Samsung AMS639RQ08 FHD Plus (2340x1080@60Hz) CMD mode panel.
config DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01
tristate "Samsung AMS452EF01 panel with S6E88A0 DSI video mode controller"
depends on OF
@ -689,6 +698,13 @@ config DRM_PANEL_SAMSUNG_S6E3HA2
depends on BACKLIGHT_CLASS_DEVICE
select VIDEOMODE_HELPERS
config DRM_PANEL_SAMSUNG_S6E3HA8
tristate "Samsung S6E3HA8 DSI video mode panel"
depends on OF
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
select VIDEOMODE_HELPERS
config DRM_PANEL_SAMSUNG_S6E63J0X03
tristate "Samsung S6E63J0X03 DSI command mode panel"
depends on OF

View File

@ -62,6 +62,7 @@ obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM68200) += panel-raydium-rm68200.o
obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM692E5) += panel-raydium-rm692e5.o
obj-$(CONFIG_DRM_PANEL_RAYDIUM_RM69380) += panel-raydium-rm69380.o
obj-$(CONFIG_DRM_PANEL_RONBO_RB070D30) += panel-ronbo-rb070d30.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_AMS639RQ08) += panel-samsung-ams639rq08.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_ATNA33XC20) += panel-samsung-atna33xc20.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_DB7430) += panel-samsung-db7430.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_LD9040) += panel-samsung-ld9040.o
@ -70,6 +71,7 @@ obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D27A1) += panel-samsung-s6d27a1.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D7AA0) += panel-samsung-s6d7aa0.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3FA7) += panel-samsung-s6e3fa7.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2) += panel-samsung-s6e3ha2.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA8) += panel-samsung-s6e3ha8.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03) += panel-samsung-s6e63j0x03.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63M0) += panel-samsung-s6e63m0.o
obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63M0_SPI) += panel-samsung-s6e63m0-spi.o

View File

@ -50,55 +50,44 @@ static inline struct kd35t133 *panel_to_kd35t133(struct drm_panel *panel)
return container_of(panel, struct kd35t133, panel);
}
static int kd35t133_init_sequence(struct kd35t133 *ctx)
static void kd35t133_init_sequence(struct mipi_dsi_multi_context *dsi_ctx)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
struct device *dev = ctx->dev;
/*
* Init sequence was supplied by the panel vendor with minimal
* documentation.
*/
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_POSITIVEGAMMA,
0x00, 0x13, 0x18, 0x04, 0x0f, 0x06, 0x3a, 0x56,
0x4d, 0x03, 0x0a, 0x06, 0x30, 0x3e, 0x0f);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_NEGATIVEGAMMA,
0x00, 0x13, 0x18, 0x01, 0x11, 0x06, 0x38, 0x34,
0x4d, 0x06, 0x0d, 0x0b, 0x31, 0x37, 0x0f);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_POWERCONTROL1, 0x18, 0x17);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_POWERCONTROL2, 0x41);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_VCOMCONTROL, 0x00, 0x1a, 0x80);
mipi_dsi_dcs_write_seq(dsi, MIPI_DCS_SET_ADDRESS_MODE, 0x48);
mipi_dsi_dcs_write_seq(dsi, MIPI_DCS_SET_PIXEL_FORMAT, 0x55);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_INTERFACEMODECTRL, 0x00);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_FRAMERATECTRL, 0xa0);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_DISPLAYINVERSIONCTRL, 0x02);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_DISPLAYFUNCTIONCTRL,
0x20, 0x02);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_SETIMAGEFUNCTION, 0x00);
mipi_dsi_dcs_write_seq(dsi, KD35T133_CMD_ADJUSTCONTROL3,
0xa9, 0x51, 0x2c, 0x82);
mipi_dsi_dcs_write(dsi, MIPI_DCS_ENTER_INVERT_MODE, NULL, 0);
dev_dbg(dev, "Panel init sequence done\n");
return 0;
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_POSITIVEGAMMA,
0x00, 0x13, 0x18, 0x04, 0x0f, 0x06, 0x3a, 0x56,
0x4d, 0x03, 0x0a, 0x06, 0x30, 0x3e, 0x0f);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_NEGATIVEGAMMA,
0x00, 0x13, 0x18, 0x01, 0x11, 0x06, 0x38, 0x34,
0x4d, 0x06, 0x0d, 0x0b, 0x31, 0x37, 0x0f);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_POWERCONTROL1, 0x18, 0x17);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_POWERCONTROL2, 0x41);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_VCOMCONTROL, 0x00, 0x1a, 0x80);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, MIPI_DCS_SET_ADDRESS_MODE, 0x48);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, MIPI_DCS_SET_PIXEL_FORMAT, 0x55);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_INTERFACEMODECTRL, 0x00);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_FRAMERATECTRL, 0xa0);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_DISPLAYINVERSIONCTRL, 0x02);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_DISPLAYFUNCTIONCTRL,
0x20, 0x02);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_SETIMAGEFUNCTION, 0x00);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, KD35T133_CMD_ADJUSTCONTROL3,
0xa9, 0x51, 0x2c, 0x82);
mipi_dsi_dcs_write_seq_multi(dsi_ctx, MIPI_DCS_ENTER_INVERT_MODE);
}
static int kd35t133_unprepare(struct drm_panel *panel)
{
struct kd35t133 *ctx = panel_to_kd35t133(panel);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
int ret;
struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
ret = mipi_dsi_dcs_set_display_off(dsi);
if (ret < 0)
dev_err(ctx->dev, "failed to set display off: %d\n", ret);
ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
if (ret < 0) {
dev_err(ctx->dev, "failed to enter sleep mode: %d\n", ret);
return ret;
}
mipi_dsi_dcs_set_display_off_multi(&dsi_ctx);
mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx);
if (dsi_ctx.accum_err)
return dsi_ctx.accum_err;
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
@ -112,18 +101,20 @@ static int kd35t133_prepare(struct drm_panel *panel)
{
struct kd35t133 *ctx = panel_to_kd35t133(panel);
struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev);
int ret;
struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
dev_dbg(ctx->dev, "Resetting the panel\n");
ret = regulator_enable(ctx->vdd);
if (ret < 0) {
dev_err(ctx->dev, "Failed to enable vdd supply: %d\n", ret);
return ret;
dsi_ctx.accum_err = regulator_enable(ctx->vdd);
if (dsi_ctx.accum_err) {
dev_err(ctx->dev, "Failed to enable vdd supply: %d\n",
dsi_ctx.accum_err);
return dsi_ctx.accum_err;
}
ret = regulator_enable(ctx->iovcc);
if (ret < 0) {
dev_err(ctx->dev, "Failed to enable iovcc supply: %d\n", ret);
dsi_ctx.accum_err = regulator_enable(ctx->iovcc);
if (dsi_ctx.accum_err) {
dev_err(ctx->dev, "Failed to enable iovcc supply: %d\n",
dsi_ctx.accum_err);
goto disable_vdd;
}
@ -135,27 +126,18 @@ static int kd35t133_prepare(struct drm_panel *panel)
msleep(20);
ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
if (ret < 0) {
dev_err(ctx->dev, "Failed to exit sleep mode: %d\n", ret);
mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx);
mipi_dsi_msleep(&dsi_ctx, 250);
kd35t133_init_sequence(&dsi_ctx);
if (!dsi_ctx.accum_err)
dev_dbg(ctx->dev, "Panel init sequence done\n");
mipi_dsi_dcs_set_display_on_multi(&dsi_ctx);
mipi_dsi_msleep(&dsi_ctx, 50);
if (dsi_ctx.accum_err)
goto disable_iovcc;
}
msleep(250);
ret = kd35t133_init_sequence(ctx);
if (ret < 0) {
dev_err(ctx->dev, "Panel init sequence failed: %d\n", ret);
goto disable_iovcc;
}
ret = mipi_dsi_dcs_set_display_on(dsi);
if (ret < 0) {
dev_err(ctx->dev, "Failed to set display on: %d\n", ret);
goto disable_iovcc;
}
msleep(50);
return 0;
@ -163,7 +145,7 @@ disable_iovcc:
regulator_disable(ctx->iovcc);
disable_vdd:
regulator_disable(ctx->vdd);
return ret;
return dsi_ctx.accum_err;
}
static const struct drm_display_mode default_mode = {

View File

@ -42,6 +42,7 @@ struct ili9881c_desc {
const size_t init_length;
const struct drm_display_mode *mode;
const unsigned long mode_flags;
u8 default_address_mode;
};
struct ili9881c {
@ -53,6 +54,7 @@ struct ili9881c {
struct gpio_desc *reset;
enum drm_panel_orientation orientation;
u8 address_mode;
};
#define ILI9881C_SWITCH_PAGE_INSTR(_page) \
@ -815,8 +817,6 @@ static const struct ili9881c_instr tl050hdv35_init[] = {
ILI9881C_COMMAND_INSTR(0xd1, 0x4b),
ILI9881C_COMMAND_INSTR(0xd2, 0x60),
ILI9881C_COMMAND_INSTR(0xd3, 0x39),
ILI9881C_SWITCH_PAGE_INSTR(0),
ILI9881C_COMMAND_INSTR(0x36, 0x03),
};
static const struct ili9881c_instr w552946ab_init[] = {
@ -1299,6 +1299,14 @@ static int ili9881c_prepare(struct drm_panel *panel)
if (ret)
return ret;
if (ctx->address_mode) {
ret = mipi_dsi_dcs_write(ctx->dsi, MIPI_DCS_SET_ADDRESS_MODE,
&ctx->address_mode,
sizeof(ctx->address_mode));
if (ret < 0)
return ret;
}
ret = mipi_dsi_dcs_set_tear_on(ctx->dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK);
if (ret)
return ret;
@ -1463,6 +1471,10 @@ static int ili9881c_get_modes(struct drm_panel *panel,
connector->display_info.width_mm = mode->width_mm;
connector->display_info.height_mm = mode->height_mm;
if (ctx->address_mode == 0x3)
connector->display_info.subpixel_order = SubPixelHorizontalBGR;
else
connector->display_info.subpixel_order = SubPixelHorizontalRGB;
/*
* TODO: Remove once all drm drivers call
@ -1521,6 +1533,12 @@ static int ili9881c_dsi_probe(struct mipi_dsi_device *dsi)
return ret;
}
ctx->address_mode = ctx->desc->default_address_mode;
if (ctx->orientation == DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP) {
ctx->address_mode ^= 0x03;
ctx->orientation = DRM_MODE_PANEL_ORIENTATION_NORMAL;
}
ctx->panel.prepare_prev_first = true;
ret = drm_panel_of_backlight(&ctx->panel);
@ -1572,6 +1590,7 @@ static const struct ili9881c_desc tl050hdv35_desc = {
.mode = &tl050hdv35_default_mode,
.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
MIPI_DSI_MODE_LPM,
.default_address_mode = 0x03,
};
static const struct ili9881c_desc w552946aba_desc = {

View File

@ -26,7 +26,6 @@ struct jadard_panel_desc {
unsigned int lanes;
enum mipi_dsi_pixel_format format;
int (*init)(struct jadard *jadard);
u32 num_init_cmds;
bool lp11_before_reset;
bool reset_before_power_off_vcioo;
unsigned int vcioo_to_lp11_delay_ms;

View File

@ -0,0 +1,329 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2024, Danila Tikhonov <danila@jiaxyga.com>
*/
#include <linux/backlight.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <video/mipi_display.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_modes.h>
#include <drm/drm_panel.h>
#include <drm/drm_probe_helper.h>
/* Manufacturer Command Set */
#define MCS_ACCESS_PROT_OFF 0xb0
#define MCS_UNKNOWN_B7 0xb7
#define MCS_BIAS_CURRENT_CTRL 0xd1
#define MCS_PASSWD1 0xf0
#define MCS_PASSWD2 0xfc
#define MCS_UNKNOWN_FF 0xff
struct ams639rq08 {
struct drm_panel panel;
struct mipi_dsi_device *dsi;
struct gpio_desc *reset_gpio;
struct regulator_bulk_data *supplies;
};
static const struct regulator_bulk_data ams639rq08_supplies[] = {
{ .supply = "vdd3p3" },
{ .supply = "vddio" },
{ .supply = "vsn" },
{ .supply = "vsp" },
};
static inline struct ams639rq08 *to_ams639rq08(struct drm_panel *panel)
{
return container_of(panel, struct ams639rq08, panel);
}
static void ams639rq08_reset(struct ams639rq08 *ctx)
{
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
usleep_range(1000, 2000);
gpiod_set_value_cansleep(ctx->reset_gpio, 0);
usleep_range(10000, 11000);
}
static int ams639rq08_on(struct ams639rq08 *ctx)
{
struct mipi_dsi_device *dsi = ctx->dsi;
struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
/* Delay 2ms for VCI1 power */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0x5a, 0x5a);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD2, 0x5a, 0x5a);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ACCESS_PROT_OFF, 0x0c);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_UNKNOWN_FF, 0x10);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ACCESS_PROT_OFF, 0x2f);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_BIAS_CURRENT_CTRL, 0x01);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0xa5, 0xa5);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD2, 0xa5, 0xa5);
/* Sleep Out */
mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx);
usleep_range(10000, 11000);
/* TE OUT (Vsync On) */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0x5a, 0x5a);
mipi_dsi_dcs_set_tear_on_multi(&dsi_ctx, MIPI_DSI_DCS_TEAR_MODE_VBLANK);
/* DBV Smooth Transition */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_UNKNOWN_B7, 0x01, 0x4b);
/* Edge Dimming Speed Setting */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ACCESS_PROT_OFF, 0x06);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_UNKNOWN_B7, 0x10);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0xa5, 0xa5);
/* Page Address Set */
mipi_dsi_dcs_set_page_address_multi(&dsi_ctx, 0x0000, 0x0923);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0x5a, 0x5a);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD2, 0x5a, 0x5a);
/* Set DDIC internal HFP */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ACCESS_PROT_OFF, 0x23);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_BIAS_CURRENT_CTRL, 0x11);
/* OFC Setting 84.1 MHz */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe9, 0x11, 0x55,
0xa6, 0x75, 0xa3,
0xb9, 0xa1, 0x4a,
0x00, 0x1a, 0xb8);
/* Err_FG Setting */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe1,
0x00, 0x00, 0x02,
0x02, 0x42, 0x02);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe2,
0x00, 0x00, 0x00,
0x00, 0x00, 0x00);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ACCESS_PROT_OFF, 0x0c);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe1, 0x19);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD1, 0xa5, 0xa5);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_PASSWD2, 0xa5, 0xa5);
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x20);
/* Brightness Control */
mipi_dsi_dcs_set_display_brightness_multi(&dsi_ctx, 0x0000);
/* Display On */
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_POWER_SAVE, 0x00);
mipi_dsi_msleep(&dsi_ctx, 67);
mipi_dsi_dcs_set_display_on_multi(&dsi_ctx);
return dsi_ctx.accum_err;
}
static void ams639rq08_off(struct ams639rq08 *ctx)
{
struct mipi_dsi_device *dsi = ctx->dsi;
struct mipi_dsi_multi_context dsi_ctx = { .dsi = dsi };
mipi_dsi_dcs_set_display_off_multi(&dsi_ctx);
mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx);
mipi_dsi_msleep(&dsi_ctx, 120);
}
static int ams639rq08_prepare(struct drm_panel *panel)
{
struct ams639rq08 *ctx = to_ams639rq08(panel);
int ret;
ret = regulator_bulk_enable(ARRAY_SIZE(ams639rq08_supplies),
ctx->supplies);
if (ret < 0)
return ret;
ams639rq08_reset(ctx);
ret = ams639rq08_on(ctx);
if (ret < 0) {
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
regulator_bulk_disable(ARRAY_SIZE(ams639rq08_supplies),
ctx->supplies);
return ret;
}
return 0;
}
static int ams639rq08_unprepare(struct drm_panel *panel)
{
struct ams639rq08 *ctx = to_ams639rq08(panel);
ams639rq08_off(ctx);
gpiod_set_value_cansleep(ctx->reset_gpio, 1);
regulator_bulk_disable(ARRAY_SIZE(ams639rq08_supplies),
ctx->supplies);
return 0;
}
static const struct drm_display_mode ams639rq08_mode = {
.clock = (1080 + 64 + 20 + 64) * (2340 + 64 + 20 + 64) * 60 / 1000,
.hdisplay = 1080,
.hsync_start = 1080 + 64,
.hsync_end = 1080 + 64 + 20,
.htotal = 1080 + 64 + 20 + 64,
.vdisplay = 2340,
.vsync_start = 2340 + 64,
.vsync_end = 2340 + 64 + 20,
.vtotal = 2340 + 64 + 20 + 64,
.width_mm = 68,
.height_mm = 147,
.type = DRM_MODE_TYPE_DRIVER,
};
static int ams639rq08_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
return drm_connector_helper_get_modes_fixed(connector, &ams639rq08_mode);
}
static const struct drm_panel_funcs ams639rq08_panel_funcs = {
.prepare = ams639rq08_prepare,
.unprepare = ams639rq08_unprepare,
.get_modes = ams639rq08_get_modes,
};
static int ams639rq08_bl_update_status(struct backlight_device *bl)
{
struct mipi_dsi_device *dsi = bl_get_data(bl);
u16 brightness = backlight_get_brightness(bl);
int ret;
dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
ret = mipi_dsi_dcs_set_display_brightness_large(dsi, brightness);
if (ret < 0)
return ret;
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
return 0;
}
static int ams639rq08_bl_get_brightness(struct backlight_device *bl)
{
struct mipi_dsi_device *dsi = bl_get_data(bl);
u16 brightness;
int ret;
dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;
ret = mipi_dsi_dcs_get_display_brightness_large(dsi, &brightness);
if (ret < 0)
return ret;
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
return brightness;
}
static const struct backlight_ops ams639rq08_bl_ops = {
.update_status = ams639rq08_bl_update_status,
.get_brightness = ams639rq08_bl_get_brightness,
};
static struct backlight_device *
ams639rq08_create_backlight(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
const struct backlight_properties props = {
.type = BACKLIGHT_RAW,
.brightness = 1023,
.max_brightness = 2047,
};
return devm_backlight_device_register(dev, dev_name(dev), dev, dsi,
&ams639rq08_bl_ops, &props);
}
static int ams639rq08_probe(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
struct ams639rq08 *ctx;
int ret;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ret = devm_regulator_bulk_get_const(&dsi->dev,
ARRAY_SIZE(ams639rq08_supplies),
ams639rq08_supplies,
&ctx->supplies);
if (ret < 0)
return dev_err_probe(dev, ret, "Failed to get regulators\n");
ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(ctx->reset_gpio))
return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio),
"Failed to get reset-gpios\n");
ctx->dsi = dsi;
mipi_dsi_set_drvdata(dsi, ctx);
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_CLOCK_NON_CONTINUOUS | MIPI_DSI_MODE_LPM;
drm_panel_init(&ctx->panel, dev, &ams639rq08_panel_funcs,
DRM_MODE_CONNECTOR_DSI);
ctx->panel.prepare_prev_first = true;
ctx->panel.backlight = ams639rq08_create_backlight(dsi);
if (IS_ERR(ctx->panel.backlight))
return dev_err_probe(dev, PTR_ERR(ctx->panel.backlight),
"Failed to create backlight\n");
drm_panel_add(&ctx->panel);
ret = devm_mipi_dsi_attach(dev, dsi);
if (ret < 0) {
drm_panel_remove(&ctx->panel);
return dev_err_probe(dev, ret, "Failed to attach to DSI host\n");
}
return 0;
}
static void ams639rq08_remove(struct mipi_dsi_device *dsi)
{
struct ams639rq08 *ctx = mipi_dsi_get_drvdata(dsi);
drm_panel_remove(&ctx->panel);
}
static const struct of_device_id ams639rq08_of_match[] = {
{ .compatible = "samsung,ams639rq08" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ams639rq08_of_match);
static struct mipi_dsi_driver ams639rq08_driver = {
.probe = ams639rq08_probe,
.remove = ams639rq08_remove,
.driver = {
.name = "panel-samsung-ams639rq08",
.of_match_table = ams639rq08_of_match,
},
};
module_mipi_dsi_driver(ams639rq08_driver);
MODULE_AUTHOR("Danila Tikhonov <danila@jiaxyga.com>");
MODULE_DESCRIPTION("DRM driver for SAMSUNG AMS639RQ08 cmd mode dsi panel");
MODULE_LICENSE("GPL");


@ -0,0 +1,342 @@
// SPDX-License-Identifier: GPL-2.0-only
//
// Generated with linux-mdss-dsi-panel-driver-generator from vendor device tree:
// Copyright (c) 2013, The Linux Foundation. All rights reserved.
// Copyright (c) 2024 Dzmitry Sankouski <dsankouski@gmail.com>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <drm/display/drm_dsc.h>
#include <drm/display/drm_dsc_helper.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_panel.h>
struct s6e3ha8 {
struct drm_panel panel;
struct mipi_dsi_device *dsi;
struct drm_dsc_config dsc;
struct gpio_desc *reset_gpio;
struct regulator_bulk_data *supplies;
};
static const struct regulator_bulk_data s6e3ha8_supplies[] = {
{ .supply = "vdd3" },
{ .supply = "vci" },
{ .supply = "vddr" },
};
static inline
struct s6e3ha8 *to_s6e3ha8_amb577px01_wqhd(struct drm_panel *panel)
{
return container_of(panel, struct s6e3ha8, panel);
}
#define s6e3ha8_test_key_on_lvl2(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0xf0, 0x5a, 0x5a)
#define s6e3ha8_test_key_off_lvl2(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0xf0, 0xa5, 0xa5)
#define s6e3ha8_test_key_on_lvl3(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0xfc, 0x5a, 0x5a)
#define s6e3ha8_test_key_off_lvl3(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0xfc, 0xa5, 0xa5)
#define s6e3ha8_test_key_on_lvl1(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0x9f, 0xa5, 0xa5)
#define s6e3ha8_test_key_off_lvl1(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0x9f, 0x5a, 0x5a)
#define s6e3ha8_afc_off(ctx) \
mipi_dsi_dcs_write_seq_multi(ctx, 0xe2, 0x00, 0x00)
static void s6e3ha8_amb577px01_wqhd_reset(struct s6e3ha8 *priv)
{
gpiod_set_value_cansleep(priv->reset_gpio, 1);
usleep_range(5000, 6000);
gpiod_set_value_cansleep(priv->reset_gpio, 0);
usleep_range(5000, 6000);
gpiod_set_value_cansleep(priv->reset_gpio, 1);
usleep_range(5000, 6000);
}
static int s6e3ha8_amb577px01_wqhd_on(struct s6e3ha8 *priv)
{
struct mipi_dsi_device *dsi = priv->dsi;
struct mipi_dsi_multi_context ctx = { .dsi = dsi };
dsi->mode_flags |= MIPI_DSI_MODE_LPM;
s6e3ha8_test_key_on_lvl1(&ctx);
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_compression_mode_multi(&ctx, true);
s6e3ha8_test_key_off_lvl2(&ctx);
mipi_dsi_dcs_exit_sleep_mode_multi(&ctx);
usleep_range(5000, 6000);
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf2, 0x13);
s6e3ha8_test_key_off_lvl2(&ctx);
usleep_range(10000, 11000);
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf2, 0x13);
s6e3ha8_test_key_off_lvl2(&ctx);
/* OMOK setting 1 (Initial setting) - Scaler Latch Setting Guide */
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x07);
/* latch setting 1 : Scaler on/off & address setting & PPS setting -> Image update latch */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf2, 0x3c, 0x10);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x0b);
/* latch setting 2 : Ratio change mode -> Image update latch */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf2, 0x30);
/* OMOK setting 2 - Seamless setting guide : WQHD */
mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x00, 0x00, 0x05, 0x9f); /* CASET */
mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x00, 0x00, 0x0b, 0x8f); /* PASET */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xba, 0x01); /* scaler setup : scaler off */
s6e3ha8_test_key_off_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0x35, 0x00); /* TE Vsync ON */
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xed, 0x4c); /* ERR_FG */
s6e3ha8_test_key_off_lvl2(&ctx);
s6e3ha8_test_key_on_lvl3(&ctx);
/* FFC Setting 897.6 Mbps */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xc5, 0x0d, 0x10, 0xb4, 0x3e, 0x01);
s6e3ha8_test_key_off_lvl3(&ctx);
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9,
0x00, 0xb0, 0x81, 0x09, 0x00, 0x00, 0x00,
0x11, 0x03); /* TSP HSYNC Setting */
s6e3ha8_test_key_off_lvl2(&ctx);
s6e3ha8_test_key_on_lvl2(&ctx);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x03);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf6, 0x43);
s6e3ha8_test_key_off_lvl2(&ctx);
s6e3ha8_test_key_on_lvl2(&ctx);
/* Brightness condition set */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xca,
0x07, 0x00, 0x00, 0x00, 0x80, 0x80, 0x80,
0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
0x80, 0x80, 0x80, 0x00, 0x00, 0x00);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb1, 0x00, 0x0c); /* AID Set : 0% */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xb5,
0x19, 0xdc, 0x16, 0x01, 0x34, 0x67, 0x9a,
0xcd, 0x01, 0x22, 0x33, 0x44, 0x00, 0x00,
0x05, 0x55, 0xcc, 0x0c, 0x01, 0x11, 0x11,
0x10); /* MPS/ELVSS Setting */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf4, 0xeb, 0x28); /* VINT */
mipi_dsi_dcs_write_seq_multi(&ctx, 0xf7, 0x03); /* Gamma, LTPS(AID) update */
s6e3ha8_test_key_off_lvl2(&ctx);
s6e3ha8_test_key_off_lvl1(&ctx);
return ctx.accum_err;
}
static int s6e3ha8_enable(struct drm_panel *panel)
{
struct s6e3ha8 *priv = to_s6e3ha8_amb577px01_wqhd(panel);
struct mipi_dsi_device *dsi = priv->dsi;
struct mipi_dsi_multi_context ctx = { .dsi = dsi };
s6e3ha8_test_key_on_lvl1(&ctx);
mipi_dsi_dcs_set_display_on_multi(&ctx);
s6e3ha8_test_key_off_lvl1(&ctx);
return ctx.accum_err;
}
static int s6e3ha8_disable(struct drm_panel *panel)
{
struct s6e3ha8 *priv = to_s6e3ha8_amb577px01_wqhd(panel);
struct mipi_dsi_device *dsi = priv->dsi;
struct mipi_dsi_multi_context ctx = { .dsi = dsi };
s6e3ha8_test_key_on_lvl1(&ctx);
mipi_dsi_dcs_set_display_off_multi(&ctx);
s6e3ha8_test_key_off_lvl1(&ctx);
mipi_dsi_msleep(&ctx, 20);
s6e3ha8_test_key_on_lvl2(&ctx);
s6e3ha8_afc_off(&ctx);
s6e3ha8_test_key_off_lvl2(&ctx);
mipi_dsi_msleep(&ctx, 160);
return ctx.accum_err;
}
static int s6e3ha8_amb577px01_wqhd_prepare(struct drm_panel *panel)
{
struct s6e3ha8 *priv = to_s6e3ha8_amb577px01_wqhd(panel);
struct mipi_dsi_device *dsi = priv->dsi;
struct mipi_dsi_multi_context ctx = { .dsi = dsi };
struct drm_dsc_picture_parameter_set pps;
int ret;
ret = regulator_bulk_enable(ARRAY_SIZE(s6e3ha8_supplies), priv->supplies);
if (ret < 0)
return ret;
mipi_dsi_msleep(&ctx, 120);
s6e3ha8_amb577px01_wqhd_reset(priv);
ret = s6e3ha8_amb577px01_wqhd_on(priv);
if (ret < 0) {
gpiod_set_value_cansleep(priv->reset_gpio, 1);
goto err;
}
drm_dsc_pps_payload_pack(&pps, &priv->dsc);
s6e3ha8_test_key_on_lvl1(&ctx);
mipi_dsi_picture_parameter_set_multi(&ctx, &pps);
s6e3ha8_test_key_off_lvl1(&ctx);
mipi_dsi_msleep(&ctx, 28);
return ctx.accum_err;
err:
regulator_bulk_disable(ARRAY_SIZE(s6e3ha8_supplies), priv->supplies);
return ret;
}
static int s6e3ha8_amb577px01_wqhd_unprepare(struct drm_panel *panel)
{
struct s6e3ha8 *priv = to_s6e3ha8_amb577px01_wqhd(panel);
return regulator_bulk_disable(ARRAY_SIZE(s6e3ha8_supplies), priv->supplies);
}
static const struct drm_display_mode s6e3ha8_amb577px01_wqhd_mode = {
.clock = (1440 + 116 + 44 + 120) * (2960 + 120 + 80 + 124) * 60 / 1000,
.hdisplay = 1440,
.hsync_start = 1440 + 116,
.hsync_end = 1440 + 116 + 44,
.htotal = 1440 + 116 + 44 + 120,
.vdisplay = 2960,
.vsync_start = 2960 + 120,
.vsync_end = 2960 + 120 + 80,
.vtotal = 2960 + 120 + 80 + 124,
.width_mm = 64,
.height_mm = 132,
};
static int s6e3ha8_amb577px01_wqhd_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
return drm_connector_helper_get_modes_fixed(connector, &s6e3ha8_amb577px01_wqhd_mode);
}
static const struct drm_panel_funcs s6e3ha8_amb577px01_wqhd_panel_funcs = {
.prepare = s6e3ha8_amb577px01_wqhd_prepare,
.unprepare = s6e3ha8_amb577px01_wqhd_unprepare,
.get_modes = s6e3ha8_amb577px01_wqhd_get_modes,
.enable = s6e3ha8_enable,
.disable = s6e3ha8_disable,
};
static int s6e3ha8_amb577px01_wqhd_probe(struct mipi_dsi_device *dsi)
{
struct device *dev = &dsi->dev;
struct s6e3ha8 *priv;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
ret = devm_regulator_bulk_get_const(dev, ARRAY_SIZE(s6e3ha8_supplies),
s6e3ha8_supplies,
&priv->supplies);
if (ret < 0) {
dev_err(dev, "failed to get regulators: %d\n", ret);
return ret;
}
priv->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(priv->reset_gpio))
return dev_err_probe(dev, PTR_ERR(priv->reset_gpio),
"Failed to get reset-gpios\n");
priv->dsi = dsi;
mipi_dsi_set_drvdata(dsi, priv);
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP |
MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET;
drm_panel_init(&priv->panel, dev, &s6e3ha8_amb577px01_wqhd_panel_funcs,
DRM_MODE_CONNECTOR_DSI);
priv->panel.prepare_prev_first = true;
drm_panel_add(&priv->panel);
/* This panel only supports DSC; unconditionally enable it */
dsi->dsc = &priv->dsc;
priv->dsc.dsc_version_major = 1;
priv->dsc.dsc_version_minor = 1;
priv->dsc.slice_height = 40;
priv->dsc.slice_width = 720;
WARN_ON(1440 % priv->dsc.slice_width);
priv->dsc.slice_count = 1440 / priv->dsc.slice_width;
priv->dsc.bits_per_component = 8;
priv->dsc.bits_per_pixel = 8 << 4; /* 4 fractional bits */
priv->dsc.block_pred_enable = true;
ret = mipi_dsi_attach(dsi);
if (ret < 0) {
dev_err(dev, "Failed to attach to DSI host: %d\n", ret);
drm_panel_remove(&priv->panel);
return ret;
}
return 0;
}
static void s6e3ha8_amb577px01_wqhd_remove(struct mipi_dsi_device *dsi)
{
struct s6e3ha8 *priv = mipi_dsi_get_drvdata(dsi);
int ret;
ret = mipi_dsi_detach(dsi);
if (ret < 0)
dev_err(&dsi->dev, "Failed to detach from DSI host: %d\n", ret);
drm_panel_remove(&priv->panel);
}
static const struct of_device_id s6e3ha8_amb577px01_wqhd_of_match[] = {
{ .compatible = "samsung,s6e3ha8" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, s6e3ha8_amb577px01_wqhd_of_match);
static struct mipi_dsi_driver s6e3ha8_amb577px01_wqhd_driver = {
.probe = s6e3ha8_amb577px01_wqhd_probe,
.remove = s6e3ha8_amb577px01_wqhd_remove,
.driver = {
.name = "panel-s6e3ha8",
.of_match_table = s6e3ha8_amb577px01_wqhd_of_match,
},
};
module_mipi_dsi_driver(s6e3ha8_amb577px01_wqhd_driver);
MODULE_AUTHOR("Dzmitry Sankouski <dsankouski@gmail.com>");
MODULE_DESCRIPTION("DRM driver for S6E3HA8 panel");
MODULE_LICENSE("GPL");


@ -62,14 +62,20 @@ static void panthor_devfreq_update_utilization(struct panthor_devfreq *pdevfreq)
static int panthor_devfreq_target(struct device *dev, unsigned long *freq,
u32 flags)
{
struct panthor_device *ptdev = dev_get_drvdata(dev);
struct dev_pm_opp *opp;
int err;
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp))
return PTR_ERR(opp);
dev_pm_opp_put(opp);
return dev_pm_opp_set_rate(dev, *freq);
err = dev_pm_opp_set_rate(dev, *freq);
if (!err)
ptdev->current_frequency = *freq;
return err;
}
static void panthor_devfreq_reset(struct panthor_devfreq *pdevfreq)
@ -130,6 +136,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
struct panthor_devfreq *pdevfreq;
struct dev_pm_opp *opp;
unsigned long cur_freq;
unsigned long freq = ULONG_MAX;
int ret;
pdevfreq = drmm_kzalloc(&ptdev->base, sizeof(*ptdev->devfreq), GFP_KERNEL);
@ -161,6 +168,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
return PTR_ERR(opp);
panthor_devfreq_profile.initial_freq = cur_freq;
ptdev->current_frequency = cur_freq;
/* Regulator coupling only takes care of synchronizing/balancing voltage
* updates, but the coupled regulator needs to be enabled manually.
@ -204,6 +212,14 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
dev_pm_opp_put(opp);
/* Find the fastest defined rate */
opp = dev_pm_opp_find_freq_floor(dev, &freq);
if (IS_ERR(opp))
return PTR_ERR(opp);
ptdev->fast_rate = freq;
dev_pm_opp_put(opp);
/*
* Setup default thresholds for the simple_ondemand governor.
* The values are chosen based on experiments.


@ -66,6 +66,25 @@ struct panthor_irq {
atomic_t suspended;
};
/**
* enum panthor_device_profiling_flags - Device profiling flags
*/
enum panthor_device_profiling_flags {
/** @PANTHOR_DEVICE_PROFILING_DISABLED: Profiling is disabled. */
PANTHOR_DEVICE_PROFILING_DISABLED = 0,
/** @PANTHOR_DEVICE_PROFILING_CYCLES: Sampling job cycles. */
PANTHOR_DEVICE_PROFILING_CYCLES = BIT(0),
/** @PANTHOR_DEVICE_PROFILING_TIMESTAMP: Sampling job timestamp. */
PANTHOR_DEVICE_PROFILING_TIMESTAMP = BIT(1),
/** @PANTHOR_DEVICE_PROFILING_ALL: Sampling everything. */
PANTHOR_DEVICE_PROFILING_ALL =
PANTHOR_DEVICE_PROFILING_CYCLES |
PANTHOR_DEVICE_PROFILING_TIMESTAMP,
};
/**
* struct panthor_device - Panthor device
*/
@ -162,6 +181,20 @@ struct panthor_device {
*/
struct page *dummy_latest_flush;
} pm;
/** @profile_mask: User-set profiling flags for job accounting. */
u32 profile_mask;
/** @current_frequency: Device clock frequency at present. Set by DVFS. */
unsigned long current_frequency;
/** @fast_rate: Maximum device clock frequency. Set by DVFS. */
unsigned long fast_rate;
};
struct panthor_gpu_usage {
u64 time;
u64 cycles;
};
/**
@ -176,6 +209,9 @@ struct panthor_file {
/** @groups: Scheduling group pool attached to this file. */
struct panthor_group_pool *groups;
/** @stats: cycle and timestamp measures for job execution. */
struct panthor_gpu_usage stats;
};
int panthor_device_init(struct panthor_device *ptdev);


@ -13,6 +13,7 @@
#include <linux/pagemap.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/time64.h>
#include <drm/drm_auth.h>
#include <drm/drm_debugfs.h>
@ -1435,6 +1436,37 @@ static int panthor_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
static void panthor_gpu_show_fdinfo(struct panthor_device *ptdev,
struct panthor_file *pfile,
struct drm_printer *p)
{
if (ptdev->profile_mask & PANTHOR_DEVICE_PROFILING_ALL)
panthor_fdinfo_gather_group_samples(pfile);
if (ptdev->profile_mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP) {
#ifdef CONFIG_ARM_ARCH_TIMER
drm_printf(p, "drm-engine-panthor:\t%llu ns\n",
DIV_ROUND_UP_ULL((pfile->stats.time * NSEC_PER_SEC),
arch_timer_get_cntfrq()));
#endif
}
if (ptdev->profile_mask & PANTHOR_DEVICE_PROFILING_CYCLES)
drm_printf(p, "drm-cycles-panthor:\t%llu\n", pfile->stats.cycles);
drm_printf(p, "drm-maxfreq-panthor:\t%lu Hz\n", ptdev->fast_rate);
drm_printf(p, "drm-curfreq-panthor:\t%lu Hz\n", ptdev->current_frequency);
}
static void panthor_show_fdinfo(struct drm_printer *p, struct drm_file *file)
{
struct drm_device *dev = file->minor->dev;
struct panthor_device *ptdev = container_of(dev, struct panthor_device, base);
panthor_gpu_show_fdinfo(ptdev, file->driver_priv, p);
drm_show_memory_stats(p, file);
}
static const struct file_operations panthor_drm_driver_fops = {
.open = drm_open,
.release = drm_release,
@ -1444,6 +1476,7 @@ static const struct file_operations panthor_drm_driver_fops = {
.read = drm_read,
.llseek = noop_llseek,
.mmap = panthor_mmap,
.show_fdinfo = drm_show_fdinfo,
.fop_flags = FOP_UNSIGNED_OFFSET,
};
@ -1466,6 +1499,7 @@ static const struct drm_driver panthor_drm_driver = {
DRIVER_SYNCOBJ_TIMELINE | DRIVER_GEM_GPUVA,
.open = panthor_open,
.postclose = panthor_postclose,
.show_fdinfo = panthor_show_fdinfo,
.ioctls = panthor_drm_driver_ioctls,
.num_ioctls = ARRAY_SIZE(panthor_drm_driver_ioctls),
.fops = &panthor_drm_driver_fops,
@ -1503,6 +1537,44 @@ static void panthor_remove(struct platform_device *pdev)
panthor_device_unplug(ptdev);
}
static ssize_t profiling_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct panthor_device *ptdev = dev_get_drvdata(dev);
return sysfs_emit(buf, "%u\n", ptdev->profile_mask);
}
static ssize_t profiling_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct panthor_device *ptdev = dev_get_drvdata(dev);
u32 value;
int err;
err = kstrtou32(buf, 0, &value);
if (err)
return err;
if ((value & ~PANTHOR_DEVICE_PROFILING_ALL) != 0)
return -EINVAL;
ptdev->profile_mask = value;
return len;
}
static DEVICE_ATTR_RW(profiling);
static struct attribute *panthor_attrs[] = {
&dev_attr_profiling.attr,
NULL,
};
ATTRIBUTE_GROUPS(panthor);
static const struct of_device_id dt_match[] = {
{ .compatible = "rockchip,rk3588-mali" },
{ .compatible = "arm,mali-valhall-csf" },
@ -1522,6 +1594,7 @@ static struct platform_driver panthor_driver = {
.name = "panthor",
.pm = pm_ptr(&panthor_pm_ops),
.of_match_table = dt_match,
.dev_groups = panthor_groups,
},
};


@ -145,6 +145,17 @@ panthor_gem_prime_export(struct drm_gem_object *obj, int flags)
return drm_gem_prime_export(obj, flags);
}
static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
{
struct panthor_gem_object *bo = to_panthor_bo(obj);
enum drm_gem_object_status res = 0;
if (bo->base.base.import_attach || bo->base.pages)
res |= DRM_GEM_OBJECT_RESIDENT;
return res;
}
static const struct drm_gem_object_funcs panthor_gem_funcs = {
.free = panthor_gem_free_object,
.print_info = drm_gem_shmem_object_print_info,
@ -154,6 +165,7 @@ static const struct drm_gem_object_funcs panthor_gem_funcs = {
.vmap = drm_gem_shmem_object_vmap,
.vunmap = drm_gem_shmem_object_vunmap,
.mmap = panthor_gem_mmap,
.status = panthor_gem_status,
.export = panthor_gem_prime_export,
.vm_ops = &drm_gem_shmem_vm_ops,
};


@ -93,6 +93,9 @@
#define MIN_CSGS 3
#define MAX_CSG_PRIO 0xf
#define NUM_INSTRS_PER_CACHE_LINE (64 / sizeof(u64))
#define MAX_INSTRS_PER_JOB 24
struct panthor_group;
/**
@ -474,6 +477,18 @@ struct panthor_queue {
*/
struct list_head in_flight_jobs;
} fence_ctx;
/** @profiling: Job profiling data slots and access information. */
struct {
/** @slots: Kernel BO holding the slots. */
struct panthor_kernel_bo *slots;
/** @slot_count: Number of jobs the ringbuffer can hold at once. */
u32 slot_count;
/** @seqno: Index of the next available profiling information slot. */
u32 seqno;
} profiling;
};
/**
@ -602,6 +617,18 @@ struct panthor_group {
*/
struct panthor_kernel_bo *syncobjs;
/** @fdinfo: Per-file total cycle and timestamp values reference. */
struct {
/** @data: Total sampled values for jobs in queues from this group. */
struct panthor_gpu_usage data;
/**
* @lock: Mutex to govern concurrent access from drm file's fdinfo callback
* and the job post-completion processing function.
*/
struct mutex lock;
} fdinfo;
/** @state: Group state. */
enum panthor_group_state state;
@ -659,6 +686,18 @@ struct panthor_group {
struct list_head wait_node;
};
struct panthor_job_profiling_data {
struct {
u64 before;
u64 after;
} cycles;
struct {
u64 before;
u64 after;
} time;
};
/**
* group_queue_work() - Queue a group work
* @group: Group to queue the work for.
@ -772,6 +811,15 @@ struct panthor_job {
/** @done_fence: Fence signaled when the job is finished or cancelled. */
struct dma_fence *done_fence;
/** @profiling: Job profiling information. */
struct {
/** @mask: Current device job profiling enablement bitmask. */
u32 mask;
/** @slot: Job index in the profiling slots BO. */
u32 slot;
} profiling;
};
static void
@ -836,6 +884,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
panthor_kernel_bo_destroy(queue->ringbuf);
panthor_kernel_bo_destroy(queue->iface.mem);
panthor_kernel_bo_destroy(queue->profiling.slots);
/* Release the last_fence we were holding, if any. */
dma_fence_put(queue->fence_ctx.last_fence);
@ -850,6 +899,8 @@ static void group_release_work(struct work_struct *work)
release_work);
u32 i;
mutex_destroy(&group->fdinfo.lock);
for (i = 0; i < group->queue_count; i++)
group_free_queue(group, group->queues[i]);
@ -1986,8 +2037,6 @@ tick_ctx_init(struct panthor_scheduler *sched,
}
}
#define NUM_INSTRS_PER_SLOT 16
static void
group_term_post_processing(struct panthor_group *group)
{
@ -2781,6 +2830,41 @@ void panthor_sched_post_reset(struct panthor_device *ptdev, bool reset_failed)
}
}
static void update_fdinfo_stats(struct panthor_job *job)
{
struct panthor_group *group = job->group;
struct panthor_queue *queue = group->queues[job->queue_idx];
struct panthor_gpu_usage *fdinfo = &group->fdinfo.data;
struct panthor_job_profiling_data *slots = queue->profiling.slots->kmap;
struct panthor_job_profiling_data *data = &slots[job->profiling.slot];
mutex_lock(&group->fdinfo.lock);
if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
fdinfo->cycles += data->cycles.after - data->cycles.before;
if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
fdinfo->time += data->time.after - data->time.before;
mutex_unlock(&group->fdinfo.lock);
}
void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
{
struct panthor_group_pool *gpool = pfile->groups;
struct panthor_group *group;
unsigned long i;
if (IS_ERR_OR_NULL(gpool))
return;
xa_for_each(&gpool->xa, i, group) {
mutex_lock(&group->fdinfo.lock);
pfile->stats.cycles += group->fdinfo.data.cycles;
pfile->stats.time += group->fdinfo.data.time;
group->fdinfo.data.cycles = 0;
group->fdinfo.data.time = 0;
mutex_unlock(&group->fdinfo.lock);
}
}
static void group_sync_upd_work(struct work_struct *work)
{
struct panthor_group *group =
@ -2813,6 +2897,8 @@ static void group_sync_upd_work(struct work_struct *work)
dma_fence_end_signalling(cookie);
list_for_each_entry_safe(job, job_tmp, &done_jobs, node) {
if (job->profiling.mask)
update_fdinfo_stats(job);
list_del_init(&job->node);
panthor_job_put(&job->base);
}
@ -2820,6 +2906,186 @@ static void group_sync_upd_work(struct work_struct *work)
group_put(group);
}
struct panthor_job_ringbuf_instrs {
u64 buffer[MAX_INSTRS_PER_JOB];
u32 count;
};
struct panthor_job_instr {
u32 profile_mask;
u64 instr;
};
#define JOB_INSTR(__prof, __instr) \
{ \
.profile_mask = __prof, \
.instr = __instr, \
}
static void
copy_instrs_to_ringbuf(struct panthor_queue *queue,
struct panthor_job *job,
struct panthor_job_ringbuf_instrs *instrs)
{
u64 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
u64 start = job->ringbuf.start & (ringbuf_size - 1);
u64 size, written;
/*
* We need to write a whole slot, including any trailing zeroes
* that may come at the end of it. Also, because instrs.buffer has
* been zero-initialised, there's no need to pad it with 0's
*/
instrs->count = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
size = instrs->count * sizeof(u64);
WARN_ON(size > ringbuf_size);
written = min(ringbuf_size - start, size);
memcpy(queue->ringbuf->kmap + start, instrs->buffer, written);
if (written < size)
memcpy(queue->ringbuf->kmap,
&instrs->buffer[written / sizeof(u64)],
size - written);
}
struct panthor_job_cs_params {
u32 profile_mask;
u64 addr_reg; u64 val_reg;
u64 cycle_reg; u64 time_reg;
u64 sync_addr; u64 times_addr;
u64 cs_start; u64 cs_size;
u32 last_flush; u32 waitall_mask;
};
static void
get_job_cs_params(struct panthor_job *job, struct panthor_job_cs_params *params)
{
struct panthor_group *group = job->group;
struct panthor_queue *queue = group->queues[job->queue_idx];
struct panthor_device *ptdev = group->ptdev;
struct panthor_scheduler *sched = ptdev->scheduler;
params->addr_reg = ptdev->csif_info.cs_reg_count -
ptdev->csif_info.unpreserved_cs_reg_count;
params->val_reg = params->addr_reg + 2;
params->cycle_reg = params->addr_reg;
params->time_reg = params->val_reg;
params->sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
job->queue_idx * sizeof(struct panthor_syncobj_64b);
params->times_addr = panthor_kernel_bo_gpuva(queue->profiling.slots) +
(job->profiling.slot * sizeof(struct panthor_job_profiling_data));
params->waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
params->cs_start = job->call_info.start;
params->cs_size = job->call_info.size;
params->last_flush = job->call_info.latest_flush;
params->profile_mask = job->profiling.mask;
}
#define JOB_INSTR_ALWAYS(instr) \
JOB_INSTR(PANTHOR_DEVICE_PROFILING_DISABLED, (instr))
#define JOB_INSTR_TIMESTAMP(instr) \
JOB_INSTR(PANTHOR_DEVICE_PROFILING_TIMESTAMP, (instr))
#define JOB_INSTR_CYCLES(instr) \
JOB_INSTR(PANTHOR_DEVICE_PROFILING_CYCLES, (instr))
static void
prepare_job_instrs(const struct panthor_job_cs_params *params,
struct panthor_job_ringbuf_instrs *instrs)
{
const struct panthor_job_instr instr_seq[] = {
/* MOV32 rX+2, cs.latest_flush */
JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->last_flush),
/* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
JOB_INSTR_ALWAYS((36ull << 56) | (0ull << 48) | (params->val_reg << 40) |
(0 << 16) | 0x233),
/* MOV48 rX:rX+1, cycles_offset */
JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
(params->times_addr +
offsetof(struct panthor_job_profiling_data, cycles.before))),
/* STORE_STATE cycles */
JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
/* MOV48 rX:rX+1, time_offset */
JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
(params->times_addr +
offsetof(struct panthor_job_profiling_data, time.before))),
/* STORE_STATE timer */
JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
/* MOV48 rX:rX+1, cs.start */
JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->cs_start),
/* MOV32 rX+2, cs.size */
JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->cs_size),
/* WAIT(0) => waits for FLUSH_CACHE2 instruction */
JOB_INSTR_ALWAYS((3ull << 56) | (1 << 16)),
/* CALL rX:rX+1, rX+2 */
JOB_INSTR_ALWAYS((32ull << 56) | (params->addr_reg << 40) |
(params->val_reg << 32)),
/* MOV48 rX:rX+1, cycles_offset */
JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
(params->times_addr +
offsetof(struct panthor_job_profiling_data, cycles.after))),
/* STORE_STATE cycles */
JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
/* MOV48 rX:rX+1, time_offset */
JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
(params->times_addr +
offsetof(struct panthor_job_profiling_data, time.after))),
/* STORE_STATE timer */
JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
/* MOV48 rX:rX+1, sync_addr */
JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->sync_addr),
/* MOV48 rX+2, #1 */
JOB_INSTR_ALWAYS((1ull << 56) | (params->val_reg << 48) | 1),
/* WAIT(all) */
JOB_INSTR_ALWAYS((3ull << 56) | (params->waitall_mask << 16)),
/* SYNC_ADD64.system_scope.propagate_err.nowait rX:rX+1, rX+2 */
JOB_INSTR_ALWAYS((51ull << 56) | (0ull << 48) | (params->addr_reg << 40) |
(params->val_reg << 32) | (0 << 16) | 1),
/* ERROR_BARRIER, so we can recover from faults at job boundaries. */
JOB_INSTR_ALWAYS((47ull << 56)),
};
u32 pad;
instrs->count = 0;
/* Need to be cacheline aligned to please the prefetcher. */
static_assert(sizeof(instrs->buffer) % 64 == 0,
"panthor_job_ringbuf_instrs::buffer is not aligned on a cacheline");
/* Make sure we have enough storage to store the whole sequence. */
static_assert(ALIGN(ARRAY_SIZE(instr_seq), NUM_INSTRS_PER_CACHE_LINE) ==
ARRAY_SIZE(instrs->buffer),
"instr_seq vs panthor_job_ringbuf_instrs::buffer size mismatch");
for (u32 i = 0; i < ARRAY_SIZE(instr_seq); i++) {
/* If the profile mask of this instruction is not enabled, skip it. */
if (instr_seq[i].profile_mask &&
!(instr_seq[i].profile_mask & params->profile_mask))
continue;
instrs->buffer[instrs->count++] = instr_seq[i].instr;
}
pad = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
memset(&instrs->buffer[instrs->count], 0,
(pad - instrs->count) * sizeof(instrs->buffer[0]));
instrs->count = pad;
}
static u32 calc_job_credits(u32 profile_mask)
{
struct panthor_job_ringbuf_instrs instrs;
struct panthor_job_cs_params params = {
.profile_mask = profile_mask,
};
prepare_job_instrs(&params, &instrs);
return instrs.count;
}
static struct dma_fence *
queue_run_job(struct drm_sched_job *sched_job)
{
@ -2828,58 +3094,11 @@ queue_run_job(struct drm_sched_job *sched_job)
struct panthor_queue *queue = group->queues[job->queue_idx];
struct panthor_device *ptdev = group->ptdev;
struct panthor_scheduler *sched = ptdev->scheduler;
u32 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
u32 ringbuf_insert = queue->iface.input->insert & (ringbuf_size - 1);
u64 addr_reg = ptdev->csif_info.cs_reg_count -
ptdev->csif_info.unpreserved_cs_reg_count;
u64 val_reg = addr_reg + 2;
u64 sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
job->queue_idx * sizeof(struct panthor_syncobj_64b);
u32 waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
struct panthor_job_ringbuf_instrs instrs;
struct panthor_job_cs_params cs_params;
struct dma_fence *done_fence;
int ret;
u64 call_instrs[NUM_INSTRS_PER_SLOT] = {
/* MOV32 rX+2, cs.latest_flush */
(2ull << 56) | (val_reg << 48) | job->call_info.latest_flush,
/* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
(36ull << 56) | (0ull << 48) | (val_reg << 40) | (0 << 16) | 0x233,
/* MOV48 rX:rX+1, cs.start */
(1ull << 56) | (addr_reg << 48) | job->call_info.start,
/* MOV32 rX+2, cs.size */
(2ull << 56) | (val_reg << 48) | job->call_info.size,
/* WAIT(0) => waits for FLUSH_CACHE2 instruction */
(3ull << 56) | (1 << 16),
/* CALL rX:rX+1, rX+2 */
(32ull << 56) | (addr_reg << 40) | (val_reg << 32),
/* MOV48 rX:rX+1, sync_addr */
(1ull << 56) | (addr_reg << 48) | sync_addr,
/* MOV48 rX+2, #1 */
(1ull << 56) | (val_reg << 48) | 1,
/* WAIT(all) */
(3ull << 56) | (waitall_mask << 16),
/* SYNC_ADD64.system_scope.propagate_err.nowait rX:rX+1, rX+2 */
(51ull << 56) | (0ull << 48) | (addr_reg << 40) | (val_reg << 32) | (0 << 16) | 1,
/* ERROR_BARRIER, so we can recover from faults at job
* boundaries.
*/
(47ull << 56),
};
/* Need to be cacheline aligned to please the prefetcher. */
static_assert(sizeof(call_instrs) % 64 == 0,
"call_instrs is not aligned on a cacheline");
/* Stream size is zero, nothing to do except making sure all previously
* submitted jobs are done before we signal the
* drm_sched_job::s_fence::finished fence.
@ -2905,17 +3124,23 @@ queue_run_job(struct drm_sched_job *sched_job)
queue->fence_ctx.id,
atomic64_inc_return(&queue->fence_ctx.seqno));
memcpy(queue->ringbuf->kmap + ringbuf_insert,
call_instrs, sizeof(call_instrs));
job->profiling.slot = queue->profiling.seqno++;
if (queue->profiling.seqno == queue->profiling.slot_count)
queue->profiling.seqno = 0;
job->ringbuf.start = queue->iface.input->insert;
get_job_cs_params(job, &cs_params);
prepare_job_instrs(&cs_params, &instrs);
copy_instrs_to_ringbuf(queue, job, &instrs);
job->ringbuf.end = job->ringbuf.start + (instrs.count * sizeof(u64));
panthor_job_get(&job->base);
spin_lock(&queue->fence_ctx.lock);
list_add_tail(&job->node, &queue->fence_ctx.in_flight_jobs);
spin_unlock(&queue->fence_ctx.lock);
job->ringbuf.start = queue->iface.input->insert;
job->ringbuf.end = job->ringbuf.start + sizeof(call_instrs);
/* Make sure the ring buffer is updated before the INSERT
* register.
*/
@ -3008,6 +3233,33 @@ static const struct drm_sched_backend_ops panthor_queue_sched_ops = {
.free_job = queue_free_job,
};
static u32 calc_profiling_ringbuf_num_slots(struct panthor_device *ptdev,
u32 cs_ringbuf_size)
{
u32 min_profiled_job_instrs = U32_MAX;
u32 last_flag = fls(PANTHOR_DEVICE_PROFILING_ALL);
/*
 * We want the minimum size of a profiled job's CS: because profiled
 * jobs need additional instructions to sample performance metrics,
 * they may take up more slots in the queue's ring buffer than
 * unprofiled ones, so fewer of them fit at once. What we need is the
 * maximum number of profiling slots to allocate, which matches the
 * maximum number of profiled jobs that can be placed simultaneously
 * in the queue's ring buffer.
 * That minimum has to be calculated separately for every single job
 * profiling flag, excluding the disabled case, since unprofiled jobs
 * don't need to keep track of profiling information at all.
 */
for (u32 i = 0; i < last_flag; i++) {
min_profiled_job_instrs =
min(min_profiled_job_instrs, calc_job_credits(BIT(i)));
}
return DIV_ROUND_UP(cs_ringbuf_size, min_profiled_job_instrs * sizeof(u64));
}
static struct panthor_queue *
group_create_queue(struct panthor_group *group,
const struct drm_panthor_queue_create *args)
@ -3061,9 +3313,35 @@ group_create_queue(struct panthor_group *group,
goto err_free_queue;
}
queue->profiling.slot_count =
calc_profiling_ringbuf_num_slots(group->ptdev, args->ringbuf_size);
queue->profiling.slots =
panthor_kernel_bo_create(group->ptdev, group->vm,
queue->profiling.slot_count *
sizeof(struct panthor_job_profiling_data),
DRM_PANTHOR_BO_NO_MMAP,
DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC |
DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED,
PANTHOR_VM_KERNEL_AUTO_VA);
if (IS_ERR(queue->profiling.slots)) {
ret = PTR_ERR(queue->profiling.slots);
goto err_free_queue;
}
ret = panthor_kernel_bo_vmap(queue->profiling.slots);
if (ret)
goto err_free_queue;
/*
 * The credit limit argument tells us the total number of instructions
 * across all CS slots in the ring buffer, with some jobs requiring
 * twice as many as others, depending on their profiling status.
 */
ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
group->ptdev->scheduler->wq, 1,
args->ringbuf_size / (NUM_INSTRS_PER_SLOT * sizeof(u64)),
args->ringbuf_size / sizeof(u64),
0, msecs_to_jiffies(JOB_TIMEOUT_MS),
group->ptdev->reset.wq,
NULL, "panthor-queue", group->ptdev->base.dev);
@ -3204,6 +3482,8 @@ int panthor_group_create(struct panthor_file *pfile,
}
mutex_unlock(&sched->reset.lock);
mutex_init(&group->fdinfo.lock);
return gid;
err_put_group:
@ -3371,6 +3651,7 @@ panthor_job_create(struct panthor_file *pfile,
{
struct panthor_group_pool *gpool = pfile->groups;
struct panthor_job *job;
u32 credits;
int ret;
if (qsubmit->pad)
@ -3424,9 +3705,16 @@ panthor_job_create(struct panthor_file *pfile,
}
}
job->profiling.mask = pfile->ptdev->profile_mask;
credits = calc_job_credits(job->profiling.mask);
if (credits == 0) {
ret = -EINVAL;
goto err_put_job;
}
ret = drm_sched_job_init(&job->base,
&job->group->queues[job->queue_idx]->entity,
1, job->group);
credits, job->group);
if (ret)
goto err_put_job;

View File

@ -47,4 +47,6 @@ void panthor_sched_resume(struct panthor_device *ptdev);
void panthor_sched_report_mmu_fault(struct panthor_device *ptdev);
void panthor_sched_report_fw_events(struct panthor_device *ptdev, u32 events);
void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile);
#endif

View File

@ -308,11 +308,11 @@ static void ttm_bo_unreserve_pinned(struct kunit *test)
err = ttm_resource_alloc(bo, place, &res2);
KUNIT_ASSERT_EQ(test, err, 0);
KUNIT_ASSERT_EQ(test,
list_is_last(&res2->lru.link, &priv->ttm_dev->pinned), 1);
list_is_last(&res2->lru.link, &priv->ttm_dev->unevictable), 1);
ttm_bo_unreserve(bo);
KUNIT_ASSERT_EQ(test,
list_is_last(&res1->lru.link, &priv->ttm_dev->pinned), 1);
list_is_last(&res1->lru.link, &priv->ttm_dev->unevictable), 1);
ttm_resource_free(bo, &res1);
ttm_resource_free(bo, &res2);

View File

@ -164,18 +164,18 @@ static void ttm_resource_init_pinned(struct kunit *test)
res = kunit_kzalloc(test, sizeof(*res), GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test, res);
KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned));
KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->unevictable));
dma_resv_lock(bo->base.resv, NULL);
ttm_bo_pin(bo);
ttm_resource_init(bo, place, res);
KUNIT_ASSERT_TRUE(test, list_is_singular(&bo->bdev->pinned));
KUNIT_ASSERT_TRUE(test, list_is_singular(&bo->bdev->unevictable));
ttm_bo_unpin(bo);
ttm_resource_fini(man, res);
dma_resv_unlock(bo->base.resv);
KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->pinned));
KUNIT_ASSERT_TRUE(test, list_empty(&bo->bdev->unevictable));
}
static void ttm_resource_fini_basic(struct kunit *test)

View File

@ -139,7 +139,7 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
goto out_err;
if (mem->mem_type != TTM_PL_SYSTEM) {
ret = ttm_tt_populate(bo->bdev, bo->ttm, ctx);
ret = ttm_bo_populate(bo, ctx);
if (ret)
goto out_err;
}
@ -594,7 +594,8 @@ void ttm_bo_pin(struct ttm_buffer_object *bo)
spin_lock(&bo->bdev->lru_lock);
if (bo->resource)
ttm_resource_del_bulk_move(bo->resource, bo);
++bo->pin_count;
if (!bo->pin_count++ && bo->resource)
ttm_resource_move_to_lru_tail(bo->resource);
spin_unlock(&bo->bdev->lru_lock);
}
EXPORT_SYMBOL(ttm_bo_pin);
@ -613,9 +614,10 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo)
return;
spin_lock(&bo->bdev->lru_lock);
--bo->pin_count;
if (bo->resource)
if (!--bo->pin_count && bo->resource) {
ttm_resource_add_bulk_move(bo->resource, bo);
ttm_resource_move_to_lru_tail(bo->resource);
}
spin_unlock(&bo->bdev->lru_lock);
}
EXPORT_SYMBOL(ttm_bo_unpin);
@ -1128,9 +1130,20 @@ ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
if (bo->bdev->funcs->swap_notify)
bo->bdev->funcs->swap_notify(bo);
if (ttm_tt_is_populated(bo->ttm))
if (ttm_tt_is_populated(bo->ttm)) {
spin_lock(&bo->bdev->lru_lock);
ttm_resource_del_bulk_move(bo->resource, bo);
spin_unlock(&bo->bdev->lru_lock);
ret = ttm_tt_swapout(bo->bdev, bo->ttm, swapout_walk->gfp_flags);
spin_lock(&bo->bdev->lru_lock);
if (ret)
ttm_resource_add_bulk_move(bo->resource, bo);
ttm_resource_move_to_lru_tail(bo->resource);
spin_unlock(&bo->bdev->lru_lock);
}
out:
/* Consider -ENOMEM and -ENOSPC non-fatal. */
if (ret == -ENOMEM || ret == -ENOSPC)
@ -1180,3 +1193,47 @@ void ttm_bo_tt_destroy(struct ttm_buffer_object *bo)
ttm_tt_destroy(bo->bdev, bo->ttm);
bo->ttm = NULL;
}
/**
* ttm_bo_populate() - Ensure that a buffer object has backing pages
* @bo: The buffer object
* @ctx: The ttm_operation_ctx governing the operation.
*
* For buffer objects in a memory type whose manager uses
* struct ttm_tt for backing pages, ensure those backing pages
* are present and hold valid content. The bo's resource is also
* placed on the correct LRU list if it was previously swapped
* out.
*
* Return: 0 if successful, negative error code on failure.
* Note: May return -EINTR or -ERESTARTSYS if @ctx.interruptible
* is set to true.
*/
int ttm_bo_populate(struct ttm_buffer_object *bo,
struct ttm_operation_ctx *ctx)
{
struct ttm_tt *tt = bo->ttm;
bool swapped;
int ret;
dma_resv_assert_held(bo->base.resv);
if (!tt)
return 0;
swapped = ttm_tt_is_swapped(tt);
ret = ttm_tt_populate(bo->bdev, tt, ctx);
if (ret)
return ret;
if (swapped && !ttm_tt_is_swapped(tt) && !bo->pin_count &&
bo->resource) {
spin_lock(&bo->bdev->lru_lock);
ttm_resource_add_bulk_move(bo->resource, bo);
ttm_resource_move_to_lru_tail(bo->resource);
spin_unlock(&bo->bdev->lru_lock);
}
return 0;
}
EXPORT_SYMBOL(ttm_bo_populate);

View File

@ -163,7 +163,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
src_man = ttm_manager_type(bdev, src_mem->mem_type);
if (ttm && ((ttm->page_flags & TTM_TT_FLAG_SWAPPED) ||
dst_man->use_tt)) {
ret = ttm_tt_populate(bdev, ttm, ctx);
ret = ttm_bo_populate(bo, ctx);
if (ret)
return ret;
}
@ -350,7 +350,7 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
BUG_ON(!ttm);
ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
ret = ttm_bo_populate(bo, &ctx);
if (ret)
return ret;
@ -507,7 +507,7 @@ int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map)
pgprot_t prot;
void *vaddr;
ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
ret = ttm_bo_populate(bo, &ctx);
if (ret)
return ret;

View File

@ -224,7 +224,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
};
ttm = bo->ttm;
err = ttm_tt_populate(bdev, bo->ttm, &ctx);
err = ttm_bo_populate(bo, &ctx);
if (err) {
if (err == -EINTR || err == -ERESTARTSYS ||
err == -EAGAIN)

View File

@ -216,7 +216,7 @@ int ttm_device_init(struct ttm_device *bdev, const struct ttm_device_funcs *func
bdev->vma_manager = vma_manager;
spin_lock_init(&bdev->lru_lock);
INIT_LIST_HEAD(&bdev->pinned);
INIT_LIST_HEAD(&bdev->unevictable);
bdev->dev_mapping = mapping;
mutex_lock(&ttm_global_mutex);
list_add_tail(&bdev->device_list, &glob->device_list);
@ -283,7 +283,7 @@ void ttm_device_clear_dma_mappings(struct ttm_device *bdev)
struct ttm_resource_manager *man;
unsigned int i, j;
ttm_device_clear_lru_dma_mappings(bdev, &bdev->pinned);
ttm_device_clear_lru_dma_mappings(bdev, &bdev->unevictable);
for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) {
man = ttm_manager_type(bdev, i);

View File

@ -30,6 +30,7 @@
#include <drm/ttm/ttm_bo.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_resource.h>
#include <drm/ttm/ttm_tt.h>
#include <drm/drm_util.h>
@ -235,11 +236,26 @@ static void ttm_lru_bulk_move_del(struct ttm_lru_bulk_move *bulk,
}
}
static bool ttm_resource_is_swapped(struct ttm_resource *res, struct ttm_buffer_object *bo)
{
/*
 * Take care when creating a new resource for a bo: the resource is not
 * considered swapped if it isn't the bo's current resource, because
 * only the current resource is logically associated with the ttm_tt.
 * Think of a VRAM resource created to move a swapped-out bo to VRAM.
 */
if (bo->resource != res || !bo->ttm)
return false;
dma_resv_assert_held(bo->base.resv);
return ttm_tt_is_swapped(bo->ttm);
}
/* Add the resource to a bulk move if the BO is configured for it */
void ttm_resource_add_bulk_move(struct ttm_resource *res,
struct ttm_buffer_object *bo)
{
if (bo->bulk_move && !bo->pin_count)
if (bo->bulk_move && !bo->pin_count && !ttm_resource_is_swapped(res, bo))
ttm_lru_bulk_move_add(bo->bulk_move, res);
}
@ -247,7 +263,7 @@ void ttm_resource_add_bulk_move(struct ttm_resource *res,
void ttm_resource_del_bulk_move(struct ttm_resource *res,
struct ttm_buffer_object *bo)
{
if (bo->bulk_move && !bo->pin_count)
if (bo->bulk_move && !bo->pin_count && !ttm_resource_is_swapped(res, bo))
ttm_lru_bulk_move_del(bo->bulk_move, res);
}
@ -259,8 +275,8 @@ void ttm_resource_move_to_lru_tail(struct ttm_resource *res)
lockdep_assert_held(&bo->bdev->lru_lock);
if (bo->pin_count) {
list_move_tail(&res->lru.link, &bdev->pinned);
if (bo->pin_count || ttm_resource_is_swapped(res, bo)) {
list_move_tail(&res->lru.link, &bdev->unevictable);
} else if (bo->bulk_move) {
struct ttm_lru_bulk_move_pos *pos =
@ -301,8 +317,8 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
man = ttm_manager_type(bo->bdev, place->mem_type);
spin_lock(&bo->bdev->lru_lock);
if (bo->pin_count)
list_add_tail(&res->lru.link, &bo->bdev->pinned);
if (bo->pin_count || ttm_resource_is_swapped(res, bo))
list_add_tail(&res->lru.link, &bo->bdev->unevictable);
else
list_add_tail(&res->lru.link, &man->lru[bo->priority]);
man->usage += res->size;

View File

@ -367,7 +367,10 @@ error:
}
return ret;
}
#if IS_ENABLED(CONFIG_DRM_TTM_KUNIT_TEST)
EXPORT_SYMBOL(ttm_tt_populate);
#endif
void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
{

View File

@ -238,6 +238,7 @@ const struct drm_driver vc5_drm_driver = {
#endif
DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE(vc5_dumb_create),
DRM_FBDEV_DMA_DRIVER_OPS,
.fops = &vc4_drm_fops,

View File

@ -224,8 +224,8 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
if (!drm_dev_enter(drm, &idx))
return;
if (hvs->vc4->gen == VC4_GEN_4)
return;
if (hvs->vc4->gen != VC4_GEN_4)
goto exit;
/* The LUT memory is laid out with each HVS channel in order,
* each of which takes 256 writes for R, 256 for G, then 256
@ -242,6 +242,7 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
for (i = 0; i < crtc->gamma_size; i++)
HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]);
exit:
drm_dev_exit(idx);
}
@ -602,7 +603,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
}
if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)
return;
goto exit;
if (debug_dump_regs) {
DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc));
@ -685,6 +686,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
vc4_hvs_dump_state(hvs);
}
exit:
drm_dev_exit(idx);
}

View File

@ -236,11 +236,7 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
return -ENODEV;
}
mutex_lock(&vc4file->perfmon.lock);
perfmon = idr_find(&vc4file->perfmon.idr, req->id);
vc4_perfmon_get(perfmon);
mutex_unlock(&vc4file->perfmon.lock);
perfmon = vc4_perfmon_find(vc4file, req->id);
if (!perfmon)
return -EINVAL;

View File

@ -901,7 +901,7 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
}
}
ret = ttm_tt_populate(bo->ttm.bdev, bo->ttm.ttm, &ctx);
ret = ttm_bo_populate(&bo->ttm, &ctx);
if (ret)
goto err_res_free;
@ -954,7 +954,7 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
if (ret)
return ret;
ret = ttm_tt_populate(bo->ttm.bdev, bo->ttm.ttm, &ctx);
ret = ttm_bo_populate(&bo->ttm, &ctx);
if (ret)
goto err_res_free;

View File

@ -152,7 +152,7 @@ static void init_backlight(struct atmel_lcdfb_info *sinfo)
}
sinfo->backlight = bl;
bl->props.power = FB_BLANK_UNBLANK;
bl->props.power = BACKLIGHT_POWER_ON;
bl->props.brightness = atmel_bl_get_brightness(bl);
}
@ -162,7 +162,7 @@ static void exit_backlight(struct atmel_lcdfb_info *sinfo)
return;
if (sinfo->backlight->ops) {
sinfo->backlight->props.power = FB_BLANK_POWERDOWN;
sinfo->backlight->props.power = BACKLIGHT_POWER_OFF;
sinfo->backlight->ops->update_status(sinfo->backlight);
}
backlight_device_unregister(sinfo->backlight);

View File

@ -1299,11 +1299,11 @@ static void aty128_set_lcd_enable(struct aty128fb_par *par, int on)
reg &= ~LVDS_DISPLAY_DIS;
aty_st_le32(LVDS_GEN_CNTL, reg);
#ifdef CONFIG_FB_ATY128_BACKLIGHT
aty128_bl_set_power(info, FB_BLANK_UNBLANK);
aty128_bl_set_power(info, BACKLIGHT_POWER_ON);
#endif
} else {
#ifdef CONFIG_FB_ATY128_BACKLIGHT
aty128_bl_set_power(info, FB_BLANK_POWERDOWN);
aty128_bl_set_power(info, BACKLIGHT_POWER_OFF);
#endif
reg = aty_ld_le32(LVDS_GEN_CNTL);
reg |= LVDS_DISPLAY_DIS;
@ -1858,7 +1858,7 @@ static void aty128_bl_init(struct aty128fb_par *par)
219 * FB_BACKLIGHT_MAX / MAX_LEVEL);
bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
bd->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(bd);
printk("aty128: Backlight initialized (%s)\n", name);

View File

@ -2272,7 +2272,7 @@ static void aty_bl_init(struct atyfb_par *par)
0xFF * FB_BACKLIGHT_MAX / MAX_LEVEL);
bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
bd->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(bd);
printk("aty: Backlight initialized (%s)\n", name);

View File

@ -179,7 +179,7 @@ void radeonfb_bl_init(struct radeonfb_info *rinfo)
217 * FB_BACKLIGHT_MAX / MAX_RADEON_LEVEL);
bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
bd->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(bd);
printk("radeonfb: Backlight initialized (%s)\n", name);

View File

@ -399,7 +399,7 @@ static int chipsfb_pci_init(struct pci_dev *dp, const struct pci_device_id *ent)
/* turn on the backlight */
mutex_lock(&pmac_backlight_mutex);
if (pmac_backlight) {
pmac_backlight->props.power = FB_BLANK_UNBLANK;
pmac_backlight->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(pmac_backlight);
}
mutex_unlock(&pmac_backlight_mutex);

View File

@ -112,7 +112,7 @@ void nvidia_bl_init(struct nvidia_par *par)
0x534 * FB_BACKLIGHT_MAX / MAX_LEVEL);
bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
bd->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(bd);
printk("nvidia: Backlight initialized (%s)\n", name);

View File

@ -1215,7 +1215,7 @@ static int dsicm_probe(struct platform_device *pdev)
ddata->bldev = bldev;
bldev->props.power = FB_BLANK_UNBLANK;
bldev->props.power = BACKLIGHT_POWER_ON;
bldev->props.brightness = 255;
dsicm_bl_update_status(bldev);
@ -1253,7 +1253,7 @@ static void dsicm_remove(struct platform_device *pdev)
bldev = ddata->bldev;
if (bldev != NULL) {
bldev->props.power = FB_BLANK_POWERDOWN;
bldev->props.power = BACKLIGHT_POWER_OFF;
dsicm_bl_update_status(bldev);
backlight_device_unregister(bldev);
}

View File

@ -754,7 +754,7 @@ static int acx565akm_probe(struct spi_device *spi)
}
memset(&props, 0, sizeof(props));
props.power = FB_BLANK_UNBLANK;
props.power = BACKLIGHT_POWER_ON;
props.type = BACKLIGHT_RAW;
bldev = backlight_device_register("acx565akm", &ddata->spi->dev,

View File

@ -347,7 +347,7 @@ static void riva_bl_init(struct riva_par *par)
FB_BACKLIGHT_MAX);
bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
bd->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(bd);
printk("riva: Backlight initialized (%s)\n", name);

View File

@ -1049,7 +1049,7 @@ static int sh_mobile_lcdc_start(struct sh_mobile_lcdc_priv *priv)
sh_mobile_lcdc_display_on(ch);
if (ch->bl) {
ch->bl->props.power = FB_BLANK_UNBLANK;
ch->bl->props.power = BACKLIGHT_POWER_ON;
backlight_update_status(ch->bl);
}
}
@ -1082,7 +1082,7 @@ static void sh_mobile_lcdc_stop(struct sh_mobile_lcdc_priv *priv)
}
if (ch->bl) {
ch->bl->props.power = FB_BLANK_POWERDOWN;
ch->bl->props.power = BACKLIGHT_POWER_OFF;
backlight_update_status(ch->bl);
}
@ -2125,7 +2125,7 @@ static int sh_mobile_lcdc_update_bl(struct backlight_device *bdev)
struct sh_mobile_lcdc_chan *ch = bl_get_data(bdev);
int brightness = bdev->props.brightness;
if (bdev->props.power != FB_BLANK_UNBLANK ||
if (bdev->props.power != BACKLIGHT_POWER_ON ||
bdev->props.state & (BL_CORE_SUSPENDED | BL_CORE_FBBLANK))
brightness = 0;

View File

@ -6,6 +6,10 @@
#ifndef DRM_IMX_BRIDGE_H
#define DRM_IMX_BRIDGE_H
struct device;
struct device_node;
struct drm_bridge;
struct drm_bridge *devm_imx_drm_legacy_bridge(struct device *dev,
struct device_node *np,
int type);

View File

@ -369,7 +369,7 @@ struct drm_driver {
uint64_t *offset);
/**
* @fbdev_probe
* @fbdev_probe:
*
Allocates and initializes the fb_info structure for fbdev emulation.
* Furthermore it also needs to allocate the DRM framebuffer used to

View File

@ -388,6 +388,18 @@ struct drm_file {
* Per-file buffer caches used by the PRIME buffer sharing code.
*/
struct drm_prime_file_private prime;
/**
* @client_name:
*
* Userspace-provided name; useful for accounting and debugging.
*/
const char *client_name;
/**
* @client_name_lock: Protects @client_name.
*/
struct mutex client_name_lock;
};
/**

View File

@ -280,6 +280,8 @@ void mipi_dsi_compression_mode_ext_multi(struct mipi_dsi_multi_context *ctx,
bool enable,
enum mipi_dsi_compression_algo algo,
unsigned int pps_selector);
void mipi_dsi_compression_mode_multi(struct mipi_dsi_multi_context *ctx,
bool enable);
void mipi_dsi_picture_parameter_set_multi(struct mipi_dsi_multi_context *ctx,
const struct drm_dsc_picture_parameter_set *pps);

View File

@ -462,5 +462,7 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo);
pgprot_t ttm_io_prot(struct ttm_buffer_object *bo, struct ttm_resource *res,
pgprot_t tmp);
void ttm_bo_tt_destroy(struct ttm_buffer_object *bo);
int ttm_bo_populate(struct ttm_buffer_object *bo,
struct ttm_operation_ctx *ctx);
#endif

View File

@ -252,9 +252,10 @@ struct ttm_device {
spinlock_t lru_lock;
/**
* @pinned: Buffer objects which are pinned and so not on any LRU list.
* @unevictable: Buffer objects which are pinned or swapped out and, as
* such, not on an LRU list.
*/
struct list_head pinned;
struct list_head unevictable;
/**
* @dev_mapping: A pointer to the struct address_space for invalidating

View File

@ -129,6 +129,11 @@ static inline bool ttm_tt_is_populated(struct ttm_tt *tt)
return tt->page_flags & TTM_TT_FLAG_PRIV_POPULATED;
}
static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
{
return tt->page_flags & TTM_TT_FLAG_SWAPPED;
}
/**
* ttm_tt_create
*

View File

@ -1024,6 +1024,13 @@ struct drm_crtc_queue_sequence {
__u64 user_data; /* user data passed to event */
};
#define DRM_CLIENT_NAME_MAX_LEN 64
struct drm_set_client_name {
__u64 name_len;
__u64 name;
};
#if defined(__cplusplus)
}
#endif
@ -1288,6 +1295,16 @@ extern "C" {
*/
#define DRM_IOCTL_MODE_CLOSEFB DRM_IOWR(0xD0, struct drm_mode_closefb)
/**
* DRM_IOCTL_SET_CLIENT_NAME - Attach a name to a drm_file
*
* Having a name allows for easier tracking and debugging.
* The length of the name (not counting the terminating NUL) must be
* <= DRM_CLIENT_NAME_MAX_LEN.
* The call will fail if the name contains whitespace or non-printable
* characters.
*/
#define DRM_IOCTL_SET_CLIENT_NAME DRM_IOWR(0xD1, struct drm_set_client_name)
/*
* Device specific ioctls should only be in their respective headers
* The device specific ioctl range is from 0x40 to 0x9f.