
Merge branch 'drm-docs' of ssh://people.freedesktop.org/~danvet/drm into drm-next

Here's my drm documentation update and driver api polish pull request.
Alex reviewed the entire pile, I've applied a little bit of spelling
polish in a few places since then and otherwise the Usual Suspects (David,
Rob, ...) don't seem up to having another look at it (I've poked them on
irc). So I think it's as good as it gets ;-)

Note that I've dropped the final imx breaker patch since that's blocked on
imx getting sane. Once that's landed I'll ping you to pick up that
straggler.

* 'drm-docs' of ssh://people.freedesktop.org/~danvet/drm: (34 commits)
  drm/imx: remove drm_mode_connector_detach_encoder harder
  drm: kerneldoc polish for drm_crtc.c
  drm: kerneldoc polish for drm_crtc_helper.c
  drm: drop error code for drm_helper_resume_force_mode
  drm/crtc-helper: remove LOCKING from kerneldoc
  drm: remove return value from drm_helper_mode_fill_fb_struct
  drm/doc: Fix misplaced </para>
  drm: remove drm_display_mode->private_size
  drm: polish function kerneldoc for drm_modes.[hc]
  drm/modes: drop maxPitch from drm_mode_validate_size
  drm/modes: drop return value from drm_display_mode_from_videomode
  drm/modes: remove drm_mode_height/width
  drm: extract drm_modes.h for drm_crtc.h functions
  drm: move drm_mode related functions into drm_modes.c
  drm/doc: Repleace LOCKING kerneldoc sections in drm_modes.c
  drm/doc: Integrate drm_modes.c kerneldoc
  drm/kms: rip out drm_mode_connector_detach_encoder
  drm/doc: Add function reference documentation for drm_mm.c
  drm/doc: Overview documentation for drm_mm.c
  drm/mm: Remove MM_UNUSED_TARGET
  ...
This commit is contained in:
Dave Airlie 2014-03-18 19:09:10 +10:00
commit 978c605016
20 changed files with 2012 additions and 797 deletions


@ -29,12 +29,26 @@
</address>
</affiliation>
</author>
<author>
<firstname>Daniel</firstname>
<surname>Vetter</surname>
<contrib>Contributions all over the place</contrib>
<affiliation>
<orgname>Intel Corporation</orgname>
<address>
<email>daniel.vetter@ffwll.ch</email>
</address>
</affiliation>
</author>
</authorgroup>
<copyright>
<year>2008-2009</year>
<year>2012</year>
<year>2013-2014</year>
<holder>Intel Corporation</holder>
</copyright>
<copyright>
<year>2012</year>
<holder>Laurent Pinchart</holder>
</copyright>
@ -60,7 +74,15 @@
<toc></toc>
<!-- Introduction -->
<part id="drmCore">
<title>DRM Core</title>
<partintro>
<para>
This first part of the DRM Developer's Guide documents core DRM code,
helper libraries for writing drivers and generic userspace interfaces
exposed by DRM drivers.
</para>
</partintro>
<chapter id="drmIntroduction">
<title>Introduction</title>
@ -264,8 +286,8 @@ char *date;</synopsis>
<para>
The <methodname>load</methodname> method is the driver and device
initialization entry point. The method is responsible for allocating and
initializing driver private data, specifying supported performance
counters, performing resource allocation and mapping (e.g. acquiring
initializing driver private data, performing resource allocation and
mapping (e.g. acquiring
clocks, mapping registers or allocating command buffers), initializing
the memory manager (<xref linkend="drm-memory-management"/>), installing
the IRQ handler (<xref linkend="drm-irq-registration"/>), setting up
@ -295,7 +317,7 @@ char *date;</synopsis>
their <methodname>load</methodname> method called with flags to 0.
</para>
<sect3>
<title>Driver Private &amp; Performance Counters</title>
<title>Driver Private Data</title>
<para>
The driver private hangs off the main
<structname>drm_device</structname> structure and can be used for
@ -307,14 +329,6 @@ char *date;</synopsis>
<structname>drm_device</structname>.<structfield>dev_private</structfield>
set to NULL when the driver is unloaded.
</para>
<para>
DRM supports several counters which were used for rough performance
characterization. This stat counter system is deprecated and should not
be used. If performance monitoring is desired, the developer should
investigate and potentially enhance the kernel perf and tracing
infrastructure to export GPU related performance information for
consumption by performance monitoring tools and applications.
</para>
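<para>
  As a rough sketch (the <structname>foo_private</structname> structure and
  <function>foo_load</function> function are hypothetical), a driver would
  allocate its private data in its <methodname>load</methodname> callback and
  hang it off <structfield>dev_private</structfield>:
</para>
<programlisting>
struct foo_private {
	void __iomem *mmio;
	/* ... further driver-private state ... */
};

static int foo_load(struct drm_device *dev, unsigned long flags)
{
	struct foo_private *priv;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	dev->dev_private = priv;

	/* map registers, set up the memory manager, install the IRQ handler, ... */

	return 0;
}
</programlisting>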
</sect3>
<sect3 id="drm-irq-registration">
<title>IRQ Registration</title>
@ -697,55 +711,16 @@ char *date;</synopsis>
respectively. The conversion is handled by the DRM core without any
driver-specific support.
</para>
<para>
Similar to global names, GEM file descriptors are also used to share GEM
objects across processes. They offer additional security: as file
descriptors must be explicitly sent over UNIX domain sockets to be shared
between applications, they can't be guessed like the globally unique GEM
names.
</para>
<para>
Drivers that support GEM file descriptors, also known as the DRM PRIME
API, must set the DRIVER_PRIME bit in the struct
<structname>drm_driver</structname>
<structfield>driver_features</structfield> field, and implement the
<methodname>prime_handle_to_fd</methodname> and
<methodname>prime_fd_to_handle</methodname> operations.
</para>
<para>
<synopsis>int (*prime_handle_to_fd)(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle,
uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd,
uint32_t *handle);</synopsis>
Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors.
</para>
<para>
While non-GEM drivers must implement the operations themselves, GEM
drivers must use the <function>drm_gem_prime_handle_to_fd</function>
and <function>drm_gem_prime_fd_to_handle</function> helper functions.
Those helpers rely on the driver
<methodname>gem_prime_export</methodname> and
<methodname>gem_prime_import</methodname> operations to create a dma-buf
instance from a GEM object (dma-buf exporter role) and to create a GEM
object from a dma-buf instance (dma-buf importer role).
</para>
<para>
<synopsis>struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
struct drm_gem_object *obj,
int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
struct dma_buf *dma_buf);</synopsis>
These two operations are mandatory for GEM drivers that support DRM
PRIME.
</para>
<sect4>
<title>DRM PRIME Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
</sect4>
<para>
GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See <xref linkend="drm-prime-support" />.
Since sharing file descriptors is inherently more secure than the
easily guessable and global GEM names, it is the preferred buffer
sharing mechanism. Sharing buffers through GEM names is only supported
for legacy userspace. Furthermore, PRIME also allows cross-device
buffer sharing since it is based on dma-bufs.
</para>
</sect3>
<sect3 id="drm-gem-objects-mapping">
<title>GEM Objects Mapping</title>
@ -829,62 +804,6 @@ char *date;</synopsis>
faults can implement their own mmap file operation handler.
</para>
</sect3>
<sect3>
<title>Dumb GEM Objects</title>
<para>
The GEM API doesn't standardize GEM objects creation and leaves it to
driver-specific ioctls. While not an issue for full-fledged graphics
stacks that include device-specific userspace components (in libdrm for
instance), this limit makes DRM-based early boot graphics unnecessarily
complex.
</para>
<para>
Dumb GEM objects partly alleviate the problem by providing a standard
API to create dumb buffers suitable for scanout, which can then be used
to create KMS frame buffers.
</para>
<para>
To support dumb GEM objects drivers must implement the
<methodname>dumb_create</methodname>,
<methodname>dumb_destroy</methodname> and
<methodname>dumb_map_offset</methodname> operations.
</para>
<itemizedlist>
<listitem>
<synopsis>int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
struct drm_mode_create_dumb *args);</synopsis>
<para>
The <methodname>dumb_create</methodname> operation creates a GEM
object suitable for scanout based on the width, height and depth
from the struct <structname>drm_mode_create_dumb</structname>
argument. It fills the argument's <structfield>handle</structfield>,
<structfield>pitch</structfield> and <structfield>size</structfield>
fields with a handle for the newly created GEM object and its line
pitch and size in bytes.
</para>
</listitem>
<listitem>
<synopsis>int (*dumb_destroy)(struct drm_file *file_priv, struct drm_device *dev,
uint32_t handle);</synopsis>
<para>
The <methodname>dumb_destroy</methodname> operation destroys a dumb
GEM object created by <methodname>dumb_create</methodname>.
</para>
</listitem>
<listitem>
<synopsis>int (*dumb_map_offset)(struct drm_file *file_priv, struct drm_device *dev,
uint32_t handle, uint64_t *offset);</synopsis>
<para>
The <methodname>dumb_map_offset</methodname> operation associates an
mmap fake offset with the GEM object given by the handle and returns
it. Drivers must use the
<function>drm_gem_create_mmap_offset</function> function to
associate the fake offset as described in
<xref linkend="drm-gem-objects-mapping"/>.
</para>
</listitem>
</itemizedlist>
</sect3>
<sect3>
<title>Memory Coherency</title>
<para>
@ -924,7 +843,99 @@ char *date;</synopsis>
abstracted from the client in libdrm.
</para>
</sect3>
</sect2>
<sect3>
<title>GEM Function Reference</title>
!Edrivers/gpu/drm/drm_gem.c
</sect3>
</sect2>
<sect2>
<title>VMA Offset Manager</title>
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
</sect2>
<sect2 id="drm-prime-support">
<title>PRIME Buffer Sharing</title>
<para>
PRIME is the cross-device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.
</para>
<sect3>
<title>Overview and Driver Interface</title>
<para>
Similar to GEM global names, PRIME file descriptors are
also used to share buffer objects across processes. They offer
additional security: as file descriptors must be explicitly sent over
UNIX domain sockets to be shared between applications, they can't be
guessed like the globally unique GEM names.
</para>
<para>
Drivers that support the PRIME
API must set the DRIVER_PRIME bit in the struct
<structname>drm_driver</structname>
<structfield>driver_features</structfield> field, and implement the
<methodname>prime_handle_to_fd</methodname> and
<methodname>prime_fd_to_handle</methodname> operations.
</para>
<para>
<synopsis>int (*prime_handle_to_fd)(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle,
uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd,
uint32_t *handle);</synopsis>
Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting
API, PRIME is agnostic to the underlying buffer object manager, as
long as handles are 32-bit unsigned integers.
</para>
<para>
While non-GEM drivers must implement the operations themselves, GEM
drivers must use the <function>drm_gem_prime_handle_to_fd</function>
and <function>drm_gem_prime_fd_to_handle</function> helper functions.
Those helpers rely on the driver
<methodname>gem_prime_export</methodname> and
<methodname>gem_prime_import</methodname> operations to create a dma-buf
instance from a GEM object (dma-buf exporter role) and to create a GEM
object from a dma-buf instance (dma-buf importer role).
</para>
<para>
<synopsis>struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
struct drm_gem_object *obj,
int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
struct dma_buf *dma_buf);</synopsis>
These two operations are mandatory for GEM drivers that support
PRIME.
</para>
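<para>
  As a sketch (the <structname>foo_driver</structname> name is hypothetical),
  a GEM driver relying on the core-provided defaults would wire PRIME support
  up roughly like this:
</para>
<programlisting>
static struct drm_driver foo_driver = {
	.driver_features	= DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,

	/* PRIME ioctl plumbing, implemented by the GEM PRIME helpers */
	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,

	/* default dma-buf exporter/importer used by the helpers above */
	.gem_prime_export	= drm_gem_prime_export,
	.gem_prime_import	= drm_gem_prime_import,

	/* ... remaining driver operations ... */
};
</programlisting>
<para>
  Note that the default export/import implementations in turn rely on further
  driver callbacks (such as <methodname>gem_prime_get_sg_table</methodname>)
  to convert between GEM objects and scatter/gather tables.
</para>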
</sect3>
<sect3>
<title>PRIME Helper Functions</title>
!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
</sect3>
</sect2>
<sect2>
<title>PRIME Function References</title>
!Edrivers/gpu/drm/drm_prime.c
</sect2>
<sect2>
<title>DRM MM Range Allocator</title>
<sect3>
<title>Overview</title>
!Pdrivers/gpu/drm/drm_mm.c Overview
</sect3>
<sect3>
<title>LRU Scan/Eviction Support</title>
!Pdrivers/gpu/drm/drm_mm.c lru scan roster
</sect3>
</sect2>
<sect2>
<title>DRM MM Range Allocator Function References</title>
!Edrivers/gpu/drm/drm_mm.c
!Iinclude/drm/drm_mm.h
</sect2>
</sect1>
<!-- Internals: mode setting -->
@ -952,6 +963,11 @@ int max_width, max_height;</synopsis>
<para>Mode setting functions.</para>
</listitem>
</itemizedlist>
<sect2>
<title>Display Modes Function Reference</title>
!Iinclude/drm/drm_modes.h
!Edrivers/gpu/drm/drm_modes.c
</sect2>
<sect2>
<title>Frame Buffer Creation</title>
<synopsis>struct drm_framebuffer *(*fb_create)(struct drm_device *dev,
@ -968,9 +984,11 @@ int max_width, max_height;</synopsis>
Frame buffers rely on the underneath memory manager for low-level memory
operations. When creating a frame buffer applications pass a memory
handle (or a list of memory handles for multi-planar formats) through
the <parameter>drm_mode_fb_cmd2</parameter> argument. This document
assumes that the driver uses GEM, those handles thus reference GEM
objects.
the <parameter>drm_mode_fb_cmd2</parameter> argument. For drivers using
GEM as their userspace buffer management interface this would be a GEM
handle. Drivers are however free to use their own backing storage object
handles, e.g. vmwgfx directly exposes special TTM handles to userspace
and so expects TTM handles in the create ioctl and not GEM handles.
</para>
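<para>
  As an example only (a sketch, not a complete implementation), a driver built
  on the CMA GEM helpers can plug the provided
  <function>drm_fb_cma_create</function> helper straight into its mode
  configuration; the <structname>foo_</structname> prefix is hypothetical:
</para>
<programlisting>
static const struct drm_mode_config_funcs foo_mode_config_funcs = {
	/* creates a drm_framebuffer from the CMA GEM handles passed by userspace */
	.fb_create = drm_fb_cma_create,
};
</programlisting>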
<para>
Drivers must first validate the requested frame buffer parameters passed
@ -992,7 +1010,7 @@ int max_width, max_height;</synopsis>
</para>
<para>
The initailization of the new framebuffer instance is finalized with a
The initialization of the new framebuffer instance is finalized with a
call to <function>drm_framebuffer_init</function> which takes a pointer
to DRM frame buffer operations (struct
<structname>drm_framebuffer_funcs</structname>). Note that this function
@ -1042,7 +1060,7 @@ int max_width, max_height;</synopsis>
<para>
The lifetime of a drm framebuffer is controlled with a reference count,
drivers can grab additional references with
<function>drm_framebuffer_reference</function> </para> and drop them
<function>drm_framebuffer_reference</function>and drop them
again with <function>drm_framebuffer_unreference</function>. For
driver-private framebuffers for which the last reference is never
dropped (e.g. for the fbdev framebuffer when the struct
@ -1050,6 +1068,72 @@ int max_width, max_height;</synopsis>
helper struct) drivers can manually clean up a framebuffer at module
unload time with
<function>drm_framebuffer_unregister_private</function>.
</para>
</sect2>
<sect2>
<title>Dumb Buffer Objects</title>
<para>
The KMS API doesn't standardize backing storage object creation and
leaves it to driver-specific ioctls. Furthermore, actually creating a
buffer object even for GEM-based drivers is done through a
driver-specific ioctl - GEM only has a common userspace interface for
sharing and destroying objects. While not an issue for full-fledged
graphics stacks that include device-specific userspace components (in
libdrm for instance), this limit makes DRM-based early boot graphics
unnecessarily complex.
</para>
<para>
Dumb objects partly alleviate the problem by providing a standard
API to create dumb buffers suitable for scanout, which can then be used
to create KMS frame buffers.
</para>
<para>
To support dumb objects drivers must implement the
<methodname>dumb_create</methodname>,
<methodname>dumb_destroy</methodname> and
<methodname>dumb_map_offset</methodname> operations.
</para>
<itemizedlist>
<listitem>
<synopsis>int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
struct drm_mode_create_dumb *args);</synopsis>
<para>
The <methodname>dumb_create</methodname> operation creates a driver
object (GEM or TTM handle) suitable for scanout based on the
width, height and depth from the struct
<structname>drm_mode_create_dumb</structname> argument. It fills the
argument's <structfield>handle</structfield>,
<structfield>pitch</structfield> and <structfield>size</structfield>
fields with a handle for the newly created object and its line
pitch and size in bytes.
</para>
</listitem>
<listitem>
<synopsis>int (*dumb_destroy)(struct drm_file *file_priv, struct drm_device *dev,
uint32_t handle);</synopsis>
<para>
The <methodname>dumb_destroy</methodname> operation destroys a dumb
object created by <methodname>dumb_create</methodname>.
</para>
</listitem>
<listitem>
<synopsis>int (*dumb_map_offset)(struct drm_file *file_priv, struct drm_device *dev,
uint32_t handle, uint64_t *offset);</synopsis>
<para>
The <methodname>dumb_map_offset</methodname> operation associates an
mmap fake offset with the object given by the handle and returns
it. Drivers must use the
<function>drm_gem_create_mmap_offset</function> function to
associate the fake offset as described in
<xref linkend="drm-gem-objects-mapping"/>.
</para>
</listitem>
</itemizedlist>
<para>
Note that dumb objects may not be used for gpu acceleration, as has been
attempted on some ARM embedded platforms. Such drivers really must have
a hardware-specific ioctl to allocate suitable buffer objects.
</para>
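<para>
  As a sketch (again with a hypothetical <structname>foo_driver</structname>),
  a driver based on the CMA GEM helpers can implement all three operations
  with existing helper functions:
</para>
<programlisting>
static struct drm_driver foo_driver = {
	.driver_features	= DRIVER_GEM | DRIVER_MODESET,

	/* dumb buffer support through the CMA GEM helpers */
	.dumb_create		= drm_gem_cma_dumb_create,
	.dumb_map_offset	= drm_gem_cma_dumb_map_offset,
	.dumb_destroy		= drm_gem_dumb_destroy,

	/* ... remaining driver operations ... */
};
</programlisting>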
</sect2>
<sect2>
<title>Output Polling</title>
@ -1130,8 +1214,11 @@ int max_width, max_height;</synopsis>
This operation is called with the mode config lock held.
</para>
<note><para>
FIXME: How should set_config interact with DPMS? If the CRTC is
suspended, should it be resumed?
Note that the drm core has no notion of restoring the mode setting
state after resume, since resume handling is entirely the
responsibility of the driver. The common mode setting helper library
does however provide a helper which can be used for this:
<function>drm_helper_resume_force_mode</function>.
</para></note>
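<para>
  A driver using the crtc helpers could therefore restore its display
  configuration from its resume path roughly as follows (a sketch with
  hypothetical <function>foo_</function> names, assuming the driver stored its
  <structname>drm_device</structname> as driver data):
</para>
<programlisting>
static int foo_pm_resume(struct device *dev)
{
	struct drm_device *drm = dev_get_drvdata(dev);

	/* re-enable clocks, restore register state, ... */

	/* then force-restore the mode setting configuration */
	drm_helper_resume_force_mode(drm);

	return 0;
}
</programlisting>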
</sect4>
<sect4>
@ -2134,7 +2221,7 @@ void intel_crt_init(struct drm_device *dev)
set the <structfield>display_info</structfield>
<structfield>width_mm</structfield> and
<structfield>height_mm</structfield> fields if they haven't been set
already (for instance at initilization time when a fixed-size panel is
already (for instance at initialization time when a fixed-size panel is
attached to the connector). The mode <structfield>width_mm</structfield>
and <structfield>height_mm</structfield> fields are only used internally
during EDID parsing and should not be set when creating modes manually.
@ -2196,10 +2283,15 @@ void intel_crt_init(struct drm_device *dev)
!Edrivers/gpu/drm/drm_flip_work.c
</sect2>
<sect2>
<title>VMA Offset Manager</title>
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
<title>HDMI Infoframes Helper Reference</title>
<para>
Strictly speaking this is not a DRM helper library but it is generally
usable by any driver interfacing with HDMI outputs, like v4l or alsa
drivers. But it nicely fits into the overall topic of mode setting helper
libraries and hence is also included here.
</para>
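<para>
  A rough usage sketch (error handling omitted; mode refers to the
  drm_display_mode being programmed):
</para>
<programlisting>
struct hdmi_avi_infoframe frame;
u8 buf[32];
ssize_t len;

drm_hdmi_avi_infoframe_from_display_mode(&amp;frame, mode);
len = hdmi_avi_infoframe_pack(&amp;frame, buf, sizeof(buf));

/* write the len packed bytes in buf to the encoder's infoframe registers */
</programlisting>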
!Iinclude/linux/hdmi.h
!Edrivers/video/hdmi.c
</sect2>
</sect1>
@ -2561,42 +2653,44 @@ int num_ioctls;</synopsis>
</para>
</sect2>
</sect1>
<sect1>
<title>Command submission &amp; fencing</title>
<title>Legacy Support Code</title>
<para>
This should cover a few device-specific command submission
implementations.
This section very briefly covers some of the old legacy support code which
is only used by old DRM drivers which have done a so-called shadow-attach
to the underlying device instead of registering as a real driver. This
also includes some of the old generic buffer management and command
submission code. Do not use any of this in new and modern drivers.
</para>
</sect1>
<!-- Internals: suspend/resume -->
<sect2>
<title>Legacy Suspend/Resume</title>
<para>
The DRM core provides some suspend/resume code, but drivers wanting full
suspend/resume support should provide save() and restore() functions.
These are called at suspend, hibernate, or resume time, and should perform
any state save or restore required by your device across suspend or
hibernate states.
</para>
<synopsis>int (*suspend) (struct drm_device *, pm_message_t state);
int (*resume) (struct drm_device *);</synopsis>
<para>
Those are legacy suspend and resume methods which
<emphasis>only</emphasis> work with the legacy shadow-attach driver
registration functions. New drivers should use the power management
interface provided by their bus type (usually through
the struct <structname>device_driver</structname> dev_pm_ops) and set
these methods to NULL.
</para>
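<para>
  For example (a sketch with hypothetical <function>foo_</function> names), a
  platform device based driver would hook its suspend/resume handlers up
  through dev_pm_ops instead:
</para>
<programlisting>
static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_pm_suspend, foo_pm_resume);

static struct platform_driver foo_platform_driver = {
	.probe	= foo_probe,
	.remove	= foo_remove,
	.driver	= {
		.name	= "foo-drm",
		.owner	= THIS_MODULE,
		.pm	= &amp;foo_pm_ops,
	},
};
</programlisting>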
</sect2>
<sect1>
<title>Suspend/Resume</title>
<para>
The DRM core provides some suspend/resume code, but drivers wanting full
suspend/resume support should provide save() and restore() functions.
These are called at suspend, hibernate, or resume time, and should perform
any state save or restore required by your device across suspend or
hibernate states.
</para>
<synopsis>int (*suspend) (struct drm_device *, pm_message_t state);
int (*resume) (struct drm_device *);</synopsis>
<para>
Those are legacy suspend and resume methods. New driver should use the
power management interface provided by their bus type (usually through
the struct <structname>device_driver</structname> dev_pm_ops) and set
these methods to NULL.
</para>
</sect1>
<sect1>
<title>DMA services</title>
<para>
This should cover how DMA mapping etc. is supported by the core.
These functions are deprecated and should not be used.
</para>
<sect2>
<title>Legacy DMA Services</title>
<para>
This should cover how DMA mapping etc. is supported by the core.
These functions are deprecated and should not be used.
</para>
</sect2>
</sect1>
</chapter>
@ -2658,8 +2752,8 @@ int (*resume) (struct drm_device *);</synopsis>
DRM core provides multiple character-devices for user-space to use.
Depending on which device is opened, user-space can perform a different
set of operations (mainly ioctls). The primary node is always created
and called <term>card&lt;num&gt;</term>. Additionally, a currently
unused control node, called <term>controlD&lt;num&gt;</term> is also
and called card&lt;num&gt;. Additionally, a currently
unused control node, called controlD&lt;num&gt; is also
created. The primary node provides all legacy operations and
historically was the only interface used by userspace. With KMS, the
control node was introduced. However, the planned KMS control interface
@ -2674,21 +2768,21 @@ int (*resume) (struct drm_device *);</synopsis>
nodes were introduced. Render nodes solely serve render clients, that
is, no modesetting or privileged ioctls can be issued on render nodes.
Only non-global rendering commands are allowed. If a driver supports
render nodes, it must advertise it via the <term>DRIVER_RENDER</term>
render nodes, it must advertise it via the DRIVER_RENDER
DRM driver capability. If not supported, the primary node must be used
for render clients together with the legacy drmAuth authentication
procedure.
</para>
<para>
If a driver advertises render node support, DRM core will create a
separate render node called <term>renderD&lt;num&gt;</term>. There will
separate render node called renderD&lt;num&gt;. There will
be one render node per device. No ioctls except PRIME-related ioctls
will be allowed on this node. Especially <term>GEM_OPEN</term> will be
will be allowed on this node. Especially GEM_OPEN will be
explicitly prohibited. Render nodes are designed to avoid the
buffer-leaks, which occur if clients guess the flink names or mmap
offsets on the legacy interface. Additionally to this basic interface,
drivers must mark their driver-dependent render-only ioctls as
<term>DRM_RENDER_ALLOW</term> so render clients can use them. Driver
DRM_RENDER_ALLOW so render clients can use them. Driver
authors must be careful not to allow any privileged ioctls on render
nodes.
</para>
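<para>
  A sketch of what this looks like for a driver (the FOO ioctl and the
  <function>foo_</function> names are hypothetical):
</para>
<programlisting>
static const struct drm_ioctl_desc foo_ioctls[] = {
	/* render clients may submit work, hence DRM_RENDER_ALLOW */
	DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
			  DRM_UNLOCKED | DRM_RENDER_ALLOW),
	/* privileged or modesetting ioctls must not be render-allowed */
};

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_RENDER,
	.ioctls		 = foo_ioctls,
	.num_ioctls	 = ARRAY_SIZE(foo_ioctls),
	/* ... */
};
</programlisting>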
@ -2749,15 +2843,73 @@ int (*resume) (struct drm_device *);</synopsis>
</sect1>
</chapter>
</part>
<part id="drmDrivers">
<title>DRM Drivers</title>
<!-- API reference -->
<appendix id="drmDriverApi">
<title>DRM Driver API</title>
<partintro>
<para>
Include auto-generated API reference here (need to reference it
from paragraphs above too).
This second part of the DRM Developer's Guide documents driver code,
implementation details and also all the driver-specific userspace
interfaces. Especially since all hardware-acceleration interfaces to
userspace are driver specific for efficiency and other reasons, these
interfaces can be rather substantial. Hence every driver has its own
chapter.
</para>
</appendix>
</partintro>
<chapter id="drmI915">
<title>drm/i915 Intel GFX Driver</title>
<para>
The drm/i915 driver supports all (with the exception of some very early
models) integrated GFX chipsets with both Intel display and rendering
blocks. This excludes a set of SoC platforms with an SGX rendering unit;
those have basic support through the gma500 drm driver.
</para>
<sect1>
<title>Display Hardware Handling</title>
<para>
This section covers everything related to the display hardware including
the mode setting infrastructure, plane, sprite and cursor handling and
display, output probing and related topics.
</para>
<sect2>
<title>Mode Setting Infrastructure</title>
<para>
The i915 driver is thus far the only DRM driver which doesn't use the
common DRM helper code to implement mode setting sequences. Thus it
has its own tailor-made infrastructure for executing a display
configuration change.
</para>
</sect2>
<sect2>
<title>Plane Configuration</title>
<para>
This section covers plane configuration and composition with the
primary plane, sprites, cursors and overlays. This includes the
infrastructure to do atomic vsync'ed updates of all this state and
also tightly coupled topics like watermark setup and computation,
framebuffer compression and panel self refresh.
</para>
</sect2>
<sect2>
<title>Output Probing</title>
<para>
This section covers output probing and related infrastructure like the
hotplug interrupt storm detection and mitigation code. Note that the
i915 driver still uses most of the common DRM helper code for output
probing, so those sections fully apply.
</para>
</sect2>
</sect1>
<sect1>
<title>Memory Management and Command Submission</title>
<para>
This section covers all things related to the GEM implementation in the
i915 driver.
</para>
</sect1>
</chapter>
</part>
</book>

File diff suppressed because it is too large


@ -105,9 +105,6 @@ static void drm_mode_validate_flag(struct drm_connector *connector,
* @maxX: max width for modes
* @maxY: max height for modes
*
* LOCKING:
* Caller must hold mode config lock.
*
* Based on the helper callbacks implemented by @connector try to detect all
* valid modes. Modes will first be added to the connector's probed_modes list,
* then culled (based on validity and the @maxX, @maxY parameters) and put into
@ -117,8 +114,8 @@ static void drm_mode_validate_flag(struct drm_connector *connector,
* @connector vfunc for drivers that use the crtc helpers for output mode
* filtering and detection.
*
* RETURNS:
* Number of modes found on @connector.
* Returns:
* The number of modes found on @connector.
*/
int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
uint32_t maxX, uint32_t maxY)
@ -131,6 +128,8 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
int mode_flags = 0;
bool verbose_prune = true;
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id,
drm_get_connector_name(connector));
/* set all modes to the unverified state */
@ -176,8 +175,7 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
drm_mode_connector_list_update(connector);
if (maxX && maxY)
drm_mode_validate_size(dev, &connector->modes, maxX,
maxY, 0);
drm_mode_validate_size(dev, &connector->modes, maxX, maxY);
if (connector->interlace_allowed)
mode_flags |= DRM_MODE_FLAG_INTERLACE;
@ -219,18 +217,19 @@ EXPORT_SYMBOL(drm_helper_probe_single_connector_modes);
* drm_helper_encoder_in_use - check if a given encoder is in use
* @encoder: encoder to check
*
* LOCKING:
* Caller must hold mode config lock.
* Checks whether @encoder is used by any connector in the current mode setting
* output configuration. This doesn't mean that it is actually enabled since
* the DPMS state is tracked separately.
*
* Walk @encoders's DRM device's mode_config and see if it's in use.
*
* RETURNS:
* True if @encoder is part of the mode_config, false otherwise.
* Returns:
* True if @encoder is used, false otherwise.
*/
bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
{
struct drm_connector *connector;
struct drm_device *dev = encoder->dev;
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
list_for_each_entry(connector, &dev->mode_config.connector_list, head)
if (connector->encoder == encoder)
return true;
@ -242,19 +241,19 @@ EXPORT_SYMBOL(drm_helper_encoder_in_use);
* drm_helper_crtc_in_use - check if a given CRTC is in a mode_config
* @crtc: CRTC to check
*
* LOCKING:
* Caller must hold mode config lock.
* Checks whether @crtc is used by any connector in the current mode setting
* output configuration. This doesn't mean that it is actually enabled since
* the DPMS state is tracked separately.
*
* Walk @crtc's DRM device's mode_config and see if it's in use.
*
* RETURNS:
* True if @crtc is part of the mode_config, false otherwise.
* Returns:
* True if @crtc is used, false otherwise.
*/
bool drm_helper_crtc_in_use(struct drm_crtc *crtc)
{
struct drm_encoder *encoder;
struct drm_device *dev = crtc->dev;
/* FIXME: Locking around list access? */
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head)
if (encoder->crtc == crtc && drm_helper_encoder_in_use(encoder))
return true;
@ -283,11 +282,11 @@ drm_encoder_disable(struct drm_encoder *encoder)
* drm_helper_disable_unused_functions - disable unused objects
* @dev: DRM device
*
* LOCKING:
* Caller must hold mode config lock.
*
* If an connector or CRTC isn't part of @dev's mode_config, it can be disabled
* by calling its dpms function, which should power it off.
* This function walks through the entire mode setting configuration of @dev. It
* will remove any crtc links of unused encoders and encoder links of
* disconnected connectors. Then it will disable all unused encoders and crtcs
* either by calling their disable callback if available or by calling their
* dpms callback with DRM_MODE_DPMS_OFF.
*/
void drm_helper_disable_unused_functions(struct drm_device *dev)
{
@ -295,6 +294,8 @@ void drm_helper_disable_unused_functions(struct drm_device *dev)
struct drm_connector *connector;
struct drm_crtc *crtc;
drm_warn_on_modeset_not_all_locked(dev);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder)
continue;
@ -355,9 +356,6 @@ drm_crtc_prepare_encoders(struct drm_device *dev)
* @y: vertical offset into the surface
* @old_fb: old framebuffer, for cleanup
*
* LOCKING:
* Caller must hold mode config lock.
*
* Try to set @mode on @crtc. Give @crtc and its associated connectors a chance
* to fixup or reject the mode prior to trying to set it. This is an internal
* helper that drivers could e.g. use to update properties that require the
@ -367,8 +365,8 @@ drm_crtc_prepare_encoders(struct drm_device *dev)
* drm_crtc_helper_set_config() helper function to drive the mode setting
* sequence.
*
* RETURNS:
* True if the mode was set successfully, or false otherwise.
* Returns:
* True if the mode was set successfully, false otherwise.
*/
bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
struct drm_display_mode *mode,
@ -384,6 +382,8 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
struct drm_encoder *encoder;
bool ret = true;
drm_warn_on_modeset_not_all_locked(dev);
saved_enabled = crtc->enabled;
crtc->enabled = drm_helper_crtc_in_use(crtc);
if (!crtc->enabled)
@ -560,17 +560,14 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
* drm_crtc_helper_set_config - set a new config from userspace
* @set: mode set configuration
*
* LOCKING:
* Caller must hold mode config lock.
*
* Setup a new configuration, provided by the upper layers (either an ioctl call
from userspace or internally e.g. from the fbdev support code) in @set, and
* enable it. This is the main helper functions for drivers that implement
* kernel mode setting with the crtc helper functions and the assorted
* ->prepare(), ->modeset() and ->commit() helper callbacks.
*
* RETURNS:
* Returns 0 on success, -ERRNO on failure.
* Returns:
* 0 on success, negative errno numbers on failure.
*/
int drm_crtc_helper_set_config(struct drm_mode_set *set)
{
@ -612,6 +609,8 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
dev = set->crtc->dev;
drm_warn_on_modeset_not_all_locked(dev);
/*
* Allocate space for the backup of all (non-pointer) encoder and
* connector data.
@ -924,8 +923,16 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
}
EXPORT_SYMBOL(drm_helper_connector_dpms);
int drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
struct drm_mode_fb_cmd2 *mode_cmd)
/**
* drm_helper_mode_fill_fb_struct - fill out framebuffer metadata
* @fb: drm_framebuffer object to fill out
* @mode_cmd: metadata from the userspace fb creation request
*
* This helper can be used in a driver's fb_create callback to pre-fill the fb's
* metadata fields.
*/
void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
struct drm_mode_fb_cmd2 *mode_cmd)
{
int i;
@ -938,17 +945,36 @@ int drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
drm_fb_get_bpp_depth(mode_cmd->pixel_format, &fb->depth,
&fb->bits_per_pixel);
fb->pixel_format = mode_cmd->pixel_format;
return 0;
}
EXPORT_SYMBOL(drm_helper_mode_fill_fb_struct);
int drm_helper_resume_force_mode(struct drm_device *dev)
/**
* drm_helper_resume_force_mode - force-restore mode setting configuration
* @dev: drm_device which should be restored
*
* Drivers which use the mode setting helpers can use this function to
* force-restore the mode setting configuration e.g. on resume or when something
* else might have trampled over the hw state (like some overzealous old BIOSen
* tended to do).
*
* This helper doesn't provide an error return value since restoring the old
* config should never fail due to resource allocation issues since the driver
* has successfully set the restored configuration already. Hence this should
* boil down to the equivalent of a few dpms on calls, which also don't provide
* an error code.
*
* Drivers where simply restoring an old configuration again might fail (e.g.
* due to slight differences in allocating shared resources when the
* configuration is restored in a different order than when userspace set it up)
* need to use their own restore logic.
*/
void drm_helper_resume_force_mode(struct drm_device *dev)
{
struct drm_crtc *crtc;
struct drm_encoder *encoder;
struct drm_crtc_helper_funcs *crtc_funcs;
int ret, encoder_dpms;
int encoder_dpms;
bool ret;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
@ -958,6 +984,7 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
ret = drm_crtc_helper_set_mode(crtc, &crtc->mode,
crtc->x, crtc->y, crtc->fb);
/* Restoring the old config should never fail! */
if (ret == false)
DRM_ERROR("failed to set mode on crtc %p\n", crtc);
@ -980,12 +1007,28 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
drm_helper_choose_crtc_dpms(crtc));
}
}
/* disable the unused connectors while restoring the modesetting */
drm_helper_disable_unused_functions(dev);
return 0;
}
EXPORT_SYMBOL(drm_helper_resume_force_mode);
/**
* drm_kms_helper_hotplug_event - fire off KMS hotplug events
* @dev: drm_device whose connector state changed
*
* This function fires off the uevent for userspace and also calls the
* output_poll_changed function, which is most commonly used to inform the fbdev
* emulation code and allow it to update the fbcon output configuration.
*
* Drivers should call this from their hotplug handling code when a change is
* detected. Note that this function does not do any output detection of its
* own, like drm_helper_hpd_irq_event() does - this is assumed to be done by the
* driver already.
*
* This function must be called from process context with no mode
* setting locks held.
*/
void drm_kms_helper_hotplug_event(struct drm_device *dev)
{
/* send a uevent + call fbdev */
@ -1054,6 +1097,16 @@ static void output_poll_execute(struct work_struct *work)
schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
}
/**
* drm_kms_helper_poll_disable - disable output polling
* @dev: drm_device
*
* This function disables the output polling work.
*
* Drivers can call this helper from their device suspend implementation. It is
* not an error to call this even when output polling isn't enabled or already
* disabled.
*/
void drm_kms_helper_poll_disable(struct drm_device *dev)
{
if (!dev->mode_config.poll_enabled)
@ -1062,6 +1115,16 @@ void drm_kms_helper_poll_disable(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_kms_helper_poll_disable);
/**
* drm_kms_helper_poll_enable - re-enable output polling.
* @dev: drm_device
*
* This function re-enables the output polling work.
*
* Drivers can call this helper from their device resume implementation. It is
* an error to call this when the output polling support has not yet been set
* up.
*/
void drm_kms_helper_poll_enable(struct drm_device *dev)
{
bool poll = false;
@ -1081,6 +1144,25 @@ void drm_kms_helper_poll_enable(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_kms_helper_poll_enable);
/**
* drm_kms_helper_poll_init - initialize and enable output polling
* @dev: drm_device
*
* This function initializes and then also enables output polling support for
* @dev. Drivers which do not have reliable hotplug support in hardware can use
* this helper infrastructure to regularly poll such connectors for changes in
* their connection state.
*
* Drivers can control which connectors are polled by setting the
* DRM_CONNECTOR_POLL_CONNECT and DRM_CONNECTOR_POLL_DISCONNECT flags. On
* connectors where probing live outputs can result in visual distortion, drivers
* should not set the DRM_CONNECTOR_POLL_DISCONNECT flag to avoid this.
* Connectors which have no flag or only DRM_CONNECTOR_POLL_HPD set are
* completely ignored by the polling logic.
*
* Note that a connector can be both polled and probed from the hotplug handler,
* in case the hotplug interrupt is known to be unreliable.
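*
* A minimal driver-side sketch: for a connector without usable hotplug
* interrupts set, at connector init time,
*
*	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
*			    DRM_CONNECTOR_POLL_DISCONNECT;
*
* and once all connectors are registered call
*
*	drm_kms_helper_poll_init(dev);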
*/
void drm_kms_helper_poll_init(struct drm_device *dev)
{
INIT_DELAYED_WORK(&dev->mode_config.output_poll_work, output_poll_execute);
@ -1090,12 +1172,39 @@ void drm_kms_helper_poll_init(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_kms_helper_poll_init);
/**
* drm_kms_helper_poll_fini - disable output polling and clean it up
* @dev: drm_device
*/
void drm_kms_helper_poll_fini(struct drm_device *dev)
{
drm_kms_helper_poll_disable(dev);
}
EXPORT_SYMBOL(drm_kms_helper_poll_fini);
/**
* drm_helper_hpd_irq_event - hotplug processing
* @dev: drm_device
*
* Drivers can use this helper function to run a detect cycle on all connectors
* which have the DRM_CONNECTOR_POLL_HPD flag set in their &polled member. All
* other connectors are ignored, which is useful to avoid reprobing fixed
* panels.
*
* This helper function is useful for drivers which can't or don't track hotplug
* interrupts for each connector.
*
* Drivers which support hotplug interrupts for each connector individually and
* which have a more fine-grained detect logic should bypass this code and
* directly call drm_kms_helper_hotplug_event() in case the connector state
* changed.
*
* This function must be called from process context with no mode
* setting locks held.
*
* Note that a connector can be both polled and probed from the hotplug handler,
* in case the hotplug interrupt is known to be unreliable.
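*
* An illustrative sketch of a caller (the foo_ name is hypothetical), typically
* a threaded interrupt handler since this must run in process context:
*
*	static irqreturn_t foo_hpd_irq_thread(int irq, void *arg)
*	{
*		struct drm_device *dev = arg;
*
*		drm_helper_hpd_irq_event(dev);
*
*		return IRQ_HANDLED;
*	}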
*/
bool drm_helper_hpd_irq_event(struct drm_device *dev)
{
struct drm_connector *connector;


@ -0,0 +1,38 @@
/*
* Copyright © 2006 Keith Packard
* Copyright © 2007-2008 Dave Airlie
* Copyright © 2007-2008 Intel Corporation
* Jesse Barnes <jesse.barnes@intel.com>
* Copyright © 2014 Intel Corporation
* Daniel Vetter <daniel.vetter@ffwll.ch>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*
* This header file contains mode setting related functions and definitions
* which are only used within the drm module as internal implementation details
* and are not exported to drivers.
*/
int drm_mode_object_get(struct drm_device *dev,
struct drm_mode_object *obj, uint32_t obj_type);
void drm_mode_object_put(struct drm_device *dev,
struct drm_mode_object *object);


@ -1098,10 +1098,14 @@ EXPORT_SYMBOL(drm_edid_is_valid);
/**
* Get EDID information via I2C.
*
* \param adapter : i2c device adaptor
* \param buf : EDID data buffer to be filled
* \param len : EDID data buffer length
* \return 0 on success or -1 on failure.
* @adapter: i2c device adaptor
* @buf: EDID data buffer to be filled
* @block: 128 byte EDID block to start fetching from
* @len: EDID data buffer length to fetch
*
* Returns:
*
* 0 on success or -1 on failure.
*
* Try to fetch EDID information by calling i2c driver function.
*/
@ -1243,9 +1247,11 @@ out:
/**
* Probe DDC presence.
* @adapter: i2c adapter to probe
*
* \param adapter : i2c device adaptor
* \return 1 on success
* Returns:
*
* 1 on success
*/
bool
drm_probe_ddc(struct i2c_adapter *adapter)
@ -1586,8 +1592,10 @@ bad_std_timing(u8 a, u8 b)
/**
* drm_mode_std - convert standard mode info (width, height, refresh) into mode
* @connector: connector of for the EDID block
* @edid: EDID block to scan
* @t: standard timing params
* @timing_level: standard timing level
* @revision: standard timing level
*
* Take the standard timing params (in this case width, aspect, and refresh)
* and convert them into a real mode using CVT/GTF/DMT.
@ -2132,6 +2140,7 @@ do_established_modes(struct detailed_timing *timing, void *c)
/**
* add_established_modes - get est. modes from EDID and add them
* @connector: connector of for the EDID block
* @edid: EDID block to scan
*
* Each EDID block contains a bitmap of the supported "established modes" list
@ -2194,6 +2203,7 @@ do_standard_modes(struct detailed_timing *timing, void *c)
/**
* add_standard_modes - get std. modes from EDID and add them
* @connector: connector of for the EDID block
* @edid: EDID block to scan
*
* Standard modes can be calculated using the appropriate standard (DMT,
@ -3300,6 +3310,7 @@ EXPORT_SYMBOL(drm_detect_hdmi_monitor);
/**
* drm_detect_monitor_audio - check monitor audio capability
* @edid: EDID block to scan
*
* Monitor should have CEA extension block.
* If monitor has 'basic audio', but no CEA audio blocks, it's 'basic
@ -3345,6 +3356,7 @@ EXPORT_SYMBOL(drm_detect_monitor_audio);
/**
* drm_rgb_quant_range_selectable - is RGB quantization range selectable?
* @edid: EDID block to scan
*
* Check whether the monitor reports the RGB quantization range selection
* as supported. The AVI infoframe can then be used to inform the monitor
@ -3564,8 +3576,8 @@ void drm_set_preferred_mode(struct drm_connector *connector,
struct drm_display_mode *mode;
list_for_each_entry(mode, &connector->probed_modes, head) {
if (drm_mode_width(mode) == hpref &&
drm_mode_height(mode) == vpref)
if (mode->hdisplay == hpref &&
mode->vdisplay == vpref)
mode->type |= DRM_MODE_TYPE_PREFERRED;
}
}


@ -1141,8 +1141,8 @@ struct drm_display_mode *drm_has_preferred_mode(struct drm_fb_helper_connector *
struct drm_display_mode *mode;
list_for_each_entry(mode, &fb_connector->connector->modes, head) {
if (drm_mode_width(mode) > width ||
drm_mode_height(mode) > height)
if (mode->hdisplay > width ||
mode->vdisplay > height)
continue;
if (mode->type & DRM_MODE_TYPE_PREFERRED)
return mode;


@ -85,9 +85,9 @@
#endif
/**
* Initialize the GEM device fields
* drm_gem_init - Initialize the GEM device fields
* @dev: drm_device structure to initialize
*/
int
drm_gem_init(struct drm_device *dev)
{
@ -120,6 +120,11 @@ drm_gem_destroy(struct drm_device *dev)
}
/**
* drm_gem_object_init - initialize an allocated shmem-backed GEM object
* @dev: drm_device the object should be initialized for
* @obj: drm_gem_object to initialize
* @size: object size
*
* Initialize an already allocated GEM object of the specified size with
* shmfs backing store.
*/
@ -141,6 +146,11 @@ int drm_gem_object_init(struct drm_device *dev,
EXPORT_SYMBOL(drm_gem_object_init);
/**
* drm_gem_private_object_init - initialize an allocated private GEM object
* @dev: drm_device the object should be initialized for
* @obj: drm_gem_object to initialize
* @size: object size
*
* Initialize an already allocated GEM object of the specified size with
* no GEM provided backing store. Instead the caller is responsible for
* backing the object and handling it.
@ -176,6 +186,9 @@ drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
}
/**
* drm_gem_object_handle_free - release resources bound to userspace handles
* @obj: GEM object to clean up.
*
* Called after the last handle to the object has been closed
*
* Removes any name for the object. Note that this must be
@ -225,7 +238,12 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
}
/**
* Removes the mapping from handle to filp for this object.
* drm_gem_handle_delete - deletes the given file-private handle
* @filp: drm file-private structure to use for the handle look up
* @handle: userspace handle to delete
*
* Removes the GEM handle from the @filp lookup table and if this is the last
* handle also cleans up linked resources like GEM names.
*/
int
drm_gem_handle_delete(struct drm_file *filp, u32 handle)
@ -270,6 +288,9 @@ EXPORT_SYMBOL(drm_gem_handle_delete);
/**
* drm_gem_dumb_destroy - dumb fb callback helper for gem based drivers
* @file: drm file-private structure to remove the dumb handle from
* @dev: corresponding drm_device
* @handle: the dumb handle to remove
*
* This implements the ->dumb_destroy kms driver callback for drivers which use
* gem to manage their backing storage.
@ -284,6 +305,9 @@ EXPORT_SYMBOL(drm_gem_dumb_destroy);
/**
* drm_gem_handle_create_tail - internal functions to create a handle
* @file_priv: drm file-private structure to register the handle for
* @obj: object to register
* @handlep: pointer to return the created handle to the caller
*
* This expects the dev->object_name_lock to be held already and will drop it
* before returning. Used to avoid races in establishing new handles when
@ -336,6 +360,11 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
}
/**
* drm_gem_handle_create - create a gem handle for an object
* @file_priv: drm file-private structure to register the handle for
* @obj: object to register
* @handlep: pointer to return the created handle to the caller
*
* Create a handle for this object. This adds a handle reference
* to the object, which includes a regular reference count. Callers
* will likely want to dereference the object afterwards.
@ -536,6 +565,11 @@ drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
EXPORT_SYMBOL(drm_gem_object_lookup);
/**
* drm_gem_close_ioctl - implementation of the GEM_CLOSE ioctl
* @dev: drm_device
* @data: ioctl data
* @file_priv: drm file-private structure
*
* Releases the handle to an mm object.
*/
int
@ -554,6 +588,11 @@ drm_gem_close_ioctl(struct drm_device *dev, void *data,
}
/**
* drm_gem_flink_ioctl - implementation of the GEM_FLINK ioctl
* @dev: drm_device
* @data: ioctl data
* @file_priv: drm file-private structure
*
* Create a global name for an object, returning the name.
*
* Note that the name does not hold a reference; when the object
@ -601,6 +640,11 @@ err:
}
/**
* drm_gem_open_ioctl - implementation of the GEM_OPEN ioctl
* @dev: drm_device
* @data: ioctl data
* @file_priv: drm file-private structure
*
* Open an object using the global name, returning a handle and the size.
*
* This handle (of course) holds a reference to the object, so the object
@ -640,6 +684,10 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,
}
/**
* drm_gem_open - initializes GEM file-private structures at devnode open time
* @dev: drm_device which is being opened by userspace
* @file_private: drm file-private structure to set up
*
* Called at device open time, sets up the structure for handling refcounting
* of mm objects.
*/
@ -650,7 +698,7 @@ drm_gem_open(struct drm_device *dev, struct drm_file *file_private)
spin_lock_init(&file_private->table_lock);
}
/**
/*
* Called at device close to release the file's
* handle references on objects.
*/
@ -674,6 +722,10 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
}
/**
* drm_gem_release - release file-private GEM resources
* @dev: drm_device which is being closed by userspace
* @file_private: drm file-private structure to clean up
*
* Called at close time when the filp is going away.
*
* Releases any remaining references on objects by this filp.
@ -699,6 +751,9 @@ drm_gem_object_release(struct drm_gem_object *obj)
EXPORT_SYMBOL(drm_gem_object_release);
/**
* drm_gem_object_free - free a GEM object
* @kref: kref of the object to free
*
* Called after the last reference to the object has been lost.
* Must be called holding struct_mutex
*


@ -47,7 +47,44 @@
#include <linux/seq_file.h>
#include <linux/export.h>
#define MM_UNUSED_TARGET 4
/**
* DOC: Overview
*
* drm_mm provides a simple range allocator. The drivers are free to use the
* resource allocator from the linux core if it suits them, the upside of drm_mm
* is that it's in the DRM core. Which means that it's easier to extend for
* some of the crazier special purpose needs of gpus.
*
* The main data struct is &drm_mm; allocations are tracked in &drm_mm_node.
* Drivers are free to embed either of them into their own suitable
* datastructures. drm_mm itself will not do any allocations of its own, so if
* drivers choose not to embed nodes they need to still allocate them
* themselves.
*
* The range allocator also supports reservation of preallocated blocks. This is
* useful for taking over initial mode setting configurations from the firmware,
* where an object needs to be created which exactly matches the firmware's
* scanout target. As long as the range is still free it can be inserted anytime
* after the allocator is initialized, which helps with avoiding looped
* dependencies in the driver load sequence.
*
* drm_mm maintains a stack of most recently freed holes, which of all
* simplistic datastructures seems to be a fairly decent approach to clustering
* allocations and avoiding too much fragmentation. This means free space
* searches are O(num_holes). Given all the fancy features drm_mm supports,
* something better would be fairly complex and, since gfx thrashing is a fairly
* steep cliff, not a real concern. Removing a node again is O(1).
*
* drm_mm supports a few features: Alignment and range restrictions can be
* supplied. Furthermore, every &drm_mm_node has a color value (which is just an
* opaque unsigned long) which in conjunction with a driver callback can be used
* to implement sophisticated placement restrictions. The i915 DRM driver uses
* this to implement guard pages between incompatible caching domains in the
* graphics TT.
*
* Finally iteration helpers to walk all nodes and all holes are provided as are
* some basic allocator dumpers for debugging.
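*
* A minimal usage sketch (driver side, error handling trimmed):
*
*	struct drm_mm mm;
*	struct drm_mm_node node;
*
*	drm_mm_init(&mm, 0, 16 * 1024 * 1024);
*
*	memset(&node, 0, sizeof(node));
*	if (drm_mm_insert_node_generic(&mm, &node, 4096, 0, 0,
*				       DRM_MM_SEARCH_DEFAULT) == 0)
*		drm_mm_remove_node(&node);
*
*	drm_mm_takedown(&mm);
*
* On success node.start holds the start of the allocated range; the node stays
* allocated until drm_mm_remove_node() is called on it. In real drivers the
* node is usually embedded in the driver's buffer object structure.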
*/
static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
unsigned long size,
@ -107,6 +144,20 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
}
}
/**
* drm_mm_reserve_node - insert a pre-initialized node
* @mm: drm_mm allocator to insert @node into
* @node: drm_mm_node to insert
*
* This function inserts an already set-up drm_mm_node into the allocator,
* meaning that start, size and color must be set by the caller. This is useful
* to initialize the allocator with preallocated objects which must be set-up
* before the range allocator can be set-up, e.g. when taking over a firmware
* framebuffer.
*
* Returns:
* 0 on success, -ENOSPC if there's no hole where @node is.
*/
int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
{
struct drm_mm_node *hole;
@ -148,9 +199,18 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
EXPORT_SYMBOL(drm_mm_reserve_node);
/**
* Search for free space and insert a preallocated memory node. Returns
* -ENOSPC if no suitable free area is available. The preallocated memory node
* must be cleared.
* drm_mm_insert_node_generic - search for space and insert @node
* @mm: drm_mm to allocate from
* @node: preallocated node to insert
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for this node
* @flags: flags to fine-tune the allocation
*
* The preallocated node must be cleared to 0.
*
* Returns:
* 0 on success, -ENOSPC if there's no suitable hole.
*/
int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
unsigned long size, unsigned alignment,
@ -222,9 +282,20 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
}
/**
* Search for free space and insert a preallocated memory node. Returns
* -ENOSPC if no suitable free area is available. This is for range
* restricted allocations. The preallocated memory node must be cleared.
* drm_mm_insert_node_in_range_generic - ranged search for space and insert @node
* @mm: drm_mm to allocate from
* @node: preallocated node to insert
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for this node
* @start: start of the allowed range for this node
* @end: end of the allowed range for this node
* @flags: flags to fine-tune the allocation
*
* The preallocated node must be cleared to 0.
*
* Returns:
* 0 on success, -ENOSPC if there's no suitable hole.
*/
int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
unsigned long size, unsigned alignment, unsigned long color,
@ -247,7 +318,12 @@ int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *n
EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
/**
* Remove a memory node from the allocator.
* drm_mm_remove_node - Remove a memory node from the allocator.
* @node: drm_mm_node to remove
*
* This just removes a node from its drm_mm allocator. The node does not need to
* be cleared again before it can be re-inserted into this or any other drm_mm
* allocator. It is a bug to call this function on an un-allocated node.
*/
void drm_mm_remove_node(struct drm_mm_node *node)
{
@ -384,7 +460,13 @@ static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_
}
/**
* Moves an allocation. To be used with embedded struct drm_mm_node.
* drm_mm_replace_node - move an allocation from @old to @new
* @old: drm_mm_node to remove from the allocator
* @new: drm_mm_node which should inherit @old's allocation
*
* This is useful for when drivers embed the drm_mm_node structure and hence
* can't move allocations by reassigning pointers. It's a combination of remove
* and insert with the guarantee that the allocation start will match.
*/
void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
{
@ -402,12 +484,46 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
EXPORT_SYMBOL(drm_mm_replace_node);
/**
* Initializa lru scanning.
* DOC: lru scan roster
*
* Very often GPUs need to have contiguous allocations for a given object. When
* evicting objects to make space for a new one it is therefore not the most
* efficient to simply select objects from the tail of an LRU until there's a
* suitable hole: Especially for big objects or nodes that otherwise have
* special allocation constraints there's a good chance we evict lots of
* (smaller) objects unnecessarily.
*
* The DRM range allocator supports this use-case through the scanning
* interfaces. First a scan operation needs to be initialized with
* drm_mm_init_scan() or drm_mm_init_scan_with_range(). Then the driver adds
* objects to the roster (probably by walking an LRU list, but this can be
* freely implemented) until a suitable hole is found or there's no further
* evictable object.
*
* Then the driver must walk through all objects again in exactly the reverse
* order to restore the allocator state. Note that while the allocator is used
* in scan mode no other operation is allowed.
*
* Finally the driver evicts all objects selected in the scan. Adding and
* removing an object is O(1), and since freeing a node is also O(1) the
* overall complexity is O(scanned_objects). So, like the free stack which
* needs to be walked before a scan operation even begins, this is linear in
* the number of objects. It doesn't seem to hurt badly.
*/
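/*
 * Illustrative sketch of the scan protocol described above, assuming a
 * hypothetical driver object with an embedded drm_mm_node and an LRU list:
 *
 *	drm_mm_init_scan(mm, size, alignment, color);
 *
 *	list_for_each_entry(obj, &lru, lru_link) {
 *		list_add(&obj->scan_link, &scan_list);
 *		if (drm_mm_scan_add_block(&obj->node))
 *			break;
 *	}
 *
 * Because new entries are prepended, walking scan_list forward visits the
 * scanned objects in exactly the reverse order of addition, as required:
 *
 *	list_for_each_entry_safe(obj, next, &scan_list, scan_link) {
 *		evict = drm_mm_scan_remove_block(&obj->node);
 *		list_del_init(&obj->scan_link);
 *		if (evict)
 *			... unbind obj and drm_mm_remove_node(&obj->node) ...
 *	}
 */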
/**
* drm_mm_init_scan - initialize lru scanning
* @mm: drm_mm to scan
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for the allocation
*
* This simply sets up the scanning routines with the parameters for the desired
* hole.
* hole. Note that there's no need to specify allocation flags, since they only
* change the place a node is allocated from within a suitable hole.
*
* Warning: As long as the scan list is non-empty, no other operations than
* Warning:
* As long as the scan list is non-empty, no other operations than
* adding/removing nodes to/from the scan list are allowed.
*/
void drm_mm_init_scan(struct drm_mm *mm,
@ -427,12 +543,20 @@ void drm_mm_init_scan(struct drm_mm *mm,
EXPORT_SYMBOL(drm_mm_init_scan);
/**
* Initializa lru scanning.
* drm_mm_init_scan_with_range - initialize range-restricted lru scanning
* @mm: drm_mm to scan
* @size: size of the allocation
* @alignment: alignment of the allocation
* @color: opaque tag value to use for the allocation
* @start: start of the allowed range for the allocation
* @end: end of the allowed range for the allocation
*
* This simply sets up the scanning routines with the parameters for the desired
* hole. This version is for range-restricted scans.
* hole. Note that there's no need to specify allocation flags, since they only
* change the place a node is allocated from within a suitable hole.
*
* Warning: As long as the scan list is non-empty, no other operations than
* Warning:
* As long as the scan list is non-empty, no other operations than
* adding/removing nodes to/from the scan list are allowed.
*/
void drm_mm_init_scan_with_range(struct drm_mm *mm,
@ -456,12 +580,16 @@ void drm_mm_init_scan_with_range(struct drm_mm *mm,
EXPORT_SYMBOL(drm_mm_init_scan_with_range);
/**
* drm_mm_scan_add_block - add a node to the scan list
* @node: drm_mm_node to add
*
* Add a node to the scan list that might be freed to make space for the desired
* hole.
*
* Returns non-zero, if a hole has been found, zero otherwise.
* Returns:
* True if a hole has been found, false otherwise.
*/
int drm_mm_scan_add_block(struct drm_mm_node *node)
bool drm_mm_scan_add_block(struct drm_mm_node *node)
{
struct drm_mm *mm = node->mm;
struct drm_mm_node *prev_node;
@ -501,15 +629,16 @@ int drm_mm_scan_add_block(struct drm_mm_node *node)
mm->scan_size, mm->scan_alignment)) {
mm->scan_hit_start = hole_start;
mm->scan_hit_end = hole_end;
return 1;
return true;
}
return 0;
return false;
}
EXPORT_SYMBOL(drm_mm_scan_add_block);
/**
* Remove a node from the scan list.
* drm_mm_scan_remove_block - remove a node from the scan list
* @node: drm_mm_node to remove
*
* Nodes _must_ be removed in the exact same order from the scan list as they
* have been added, otherwise the internal state of the memory manager will be
@ -519,10 +648,11 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
* immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then
* return the just freed block (because it's at the top of the free_stack list).
*
* Returns one if this block should be evicted, zero otherwise. Will always
* return zero when no hole has been found.
* Returns:
* True if this block should be evicted, false otherwise. Will always
* return false when no hole has been found.
*/
int drm_mm_scan_remove_block(struct drm_mm_node *node)
bool drm_mm_scan_remove_block(struct drm_mm_node *node)
{
struct drm_mm *mm = node->mm;
struct drm_mm_node *prev_node;
@ -543,7 +673,15 @@ int drm_mm_scan_remove_block(struct drm_mm_node *node)
}
EXPORT_SYMBOL(drm_mm_scan_remove_block);
int drm_mm_clean(struct drm_mm * mm)
/**
* drm_mm_clean - checks whether an allocator is clean
* @mm: drm_mm allocator to check
*
* Returns:
* True if the allocator is completely free, false if there's still a node
* allocated in it.
*/
bool drm_mm_clean(struct drm_mm * mm)
{
struct list_head *head = &mm->head_node.node_list;
@ -551,6 +689,14 @@ int drm_mm_clean(struct drm_mm * mm)
}
EXPORT_SYMBOL(drm_mm_clean);
/**
* drm_mm_init - initialize a drm-mm allocator
* @mm: the drm_mm structure to initialize
* @start: start of the range managed by @mm
* @size: size of the range managed by @mm
*
* Note that @mm must be cleared to 0 before calling this function.
*/
void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
{
INIT_LIST_HEAD(&mm->hole_stack);
@ -572,6 +718,13 @@ void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
}
EXPORT_SYMBOL(drm_mm_init);
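/*
 * Illustrative lifecycle sketch, assuming a hypothetical driver structure
 * containing a zeroed struct drm_mm:
 *
 *	drm_mm_init(&dev_priv->vram_mm, 0, vram_size >> PAGE_SHIFT);
 *
 *	... insert, reserve and remove nodes at runtime ...
 *
 *	WARN_ON(!drm_mm_clean(&dev_priv->vram_mm));
 *	drm_mm_takedown(&dev_priv->vram_mm);
 */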
/**
* drm_mm_takedown - clean up a drm_mm allocator
* @mm: drm_mm allocator to clean up
*
* Note that it is a bug to call this function on an allocator which is not
* clean.
*/
void drm_mm_takedown(struct drm_mm * mm)
{
WARN(!list_empty(&mm->head_node.node_list),
@ -597,6 +750,11 @@ static unsigned long drm_mm_debug_hole(struct drm_mm_node *entry,
return 0;
}
/**
* drm_mm_debug_table - dump allocator state to dmesg
* @mm: drm_mm allocator to dump
* @prefix: prefix to use for dumping to dmesg
*/
void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
{
struct drm_mm_node *entry;
@ -635,6 +793,11 @@ static unsigned long drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *en
return 0;
}
/**
* drm_mm_dump_table - dump allocator state to a seq_file
* @m: seq_file to dump to
* @mm: drm_mm allocator to dump
*/
int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm)
{
struct drm_mm_node *entry;

View File

@ -37,15 +37,14 @@
#include <drm/drm_crtc.h>
#include <video/of_videomode.h>
#include <video/videomode.h>
#include <drm/drm_modes.h>
#include "drm_crtc_internal.h"
/**
* drm_mode_debug_printmodeline - debug print a mode
* @dev: DRM device
* drm_mode_debug_printmodeline - print a mode to dmesg
* @mode: mode to print
*
* LOCKING:
* None.
*
* Describe @mode using DRM_DEBUG.
*/
void drm_mode_debug_printmodeline(const struct drm_display_mode *mode)
@ -61,18 +60,77 @@ void drm_mode_debug_printmodeline(const struct drm_display_mode *mode)
EXPORT_SYMBOL(drm_mode_debug_printmodeline);
/**
* drm_cvt_mode -create a modeline based on CVT algorithm
* drm_mode_create - create a new display mode
* @dev: DRM device
*
* Create a new, cleared drm_display_mode with kzalloc, allocate an ID for it
* and return it.
*
* Returns:
* Pointer to new mode on success, NULL on error.
*/
struct drm_display_mode *drm_mode_create(struct drm_device *dev)
{
struct drm_display_mode *nmode;
nmode = kzalloc(sizeof(struct drm_display_mode), GFP_KERNEL);
if (!nmode)
return NULL;
if (drm_mode_object_get(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
kfree(nmode);
return NULL;
}
return nmode;
}
EXPORT_SYMBOL(drm_mode_create);
/**
* drm_mode_destroy - remove a mode
* @dev: DRM device
* @mode: mode to remove
*
* Release @mode's unique ID, then free the @mode structure itself using kfree.
*/
void drm_mode_destroy(struct drm_device *dev, struct drm_display_mode *mode)
{
if (!mode)
return;
drm_mode_object_put(dev, &mode->base);
kfree(mode);
}
EXPORT_SYMBOL(drm_mode_destroy);
/**
* drm_mode_probed_add - add a mode to a connector's probed_mode list
* @connector: connector the new mode is added to
* @mode: mode data
*
* Add @mode to @connector's probed_mode list for later use. This list should
* then, in a second step, be filtered, with all the modes actually supported by
* the hardware moved to @connector's modes list.
*/
void drm_mode_probed_add(struct drm_connector *connector,
struct drm_display_mode *mode)
{
WARN_ON(!mutex_is_locked(&connector->dev->mode_config.mutex));
list_add_tail(&mode->head, &connector->probed_modes);
}
EXPORT_SYMBOL(drm_mode_probed_add);
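/*
 * Illustrative sketch of a connector's ->get_modes() hook using this helper,
 * with a hypothetical function name and a made-up 1920x1080@60 panel:
 *
 *	static int panel_connector_get_modes(struct drm_connector *connector)
 *	{
 *		struct drm_display_mode *mode;
 *
 *		mode = drm_cvt_mode(connector->dev, 1920, 1080, 60,
 *				    false, false, false);
 *		if (!mode)
 *			return 0;
 *
 *		drm_mode_probed_add(connector, mode);
 *		return 1;
 *	}
 */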
/**
* drm_cvt_mode - create a modeline based on the CVT algorithm
* @dev: drm device
* @hdisplay: hdisplay size
* @vdisplay: vdisplay size
* @vrefresh : vrefresh rate
* @reduced : Whether the GTF calculation is simplified
* @interlaced:Whether the interlace is supported
*
* LOCKING:
* none.
*
* return the modeline based on CVT algorithm
* @vrefresh: vrefresh rate
* @reduced: whether to use reduced blanking
* @interlaced: whether to compute an interlaced mode
* @margins: whether to add margins (borders)
*
* This function is called to generate the modeline based on the CVT algorithm
* according to the hdisplay, vdisplay, vrefresh.
@ -82,12 +140,17 @@ EXPORT_SYMBOL(drm_mode_debug_printmodeline);
*
* And it is copied from xf86CVTmode in xserver/hw/xfree86/modes/xf86cvt.c.
* What I have done is to translate it by using integer calculation.
*
* Returns:
* The modeline based on the CVT algorithm stored in a drm_display_mode object.
* The display mode object is allocated with drm_mode_create(). Returns NULL
* when no mode could be allocated.
*/
#define HV_FACTOR 1000
struct drm_display_mode *drm_cvt_mode(struct drm_device *dev, int hdisplay,
int vdisplay, int vrefresh,
bool reduced, bool interlaced, bool margins)
{
#define HV_FACTOR 1000
/* 1) top/bottom margin size (% of height) - default: 1.8, */
#define CVT_MARGIN_PERCENTAGE 18
/* 2) character cell horizontal granularity (pixels) - default 8 */
@ -281,23 +344,25 @@ struct drm_display_mode *drm_cvt_mode(struct drm_device *dev, int hdisplay,
EXPORT_SYMBOL(drm_cvt_mode);
/**
* drm_gtf_mode_complex - create the modeline based on full GTF algorithm
*
* @dev :drm device
* @hdisplay :hdisplay size
* @vdisplay :vdisplay size
* @vrefresh :vrefresh rate.
* @interlaced :whether the interlace is supported
* @margins :desired margin size
* @GTF_[MCKJ] :extended GTF formula parameters
*
* LOCKING.
* none.
*
* return the modeline based on full GTF algorithm.
* drm_gtf_mode_complex - create the modeline based on the full GTF algorithm
* @dev: drm device
* @hdisplay: hdisplay size
* @vdisplay: vdisplay size
* @vrefresh: vrefresh rate.
* @interlaced: whether to compute an interlaced mode
* @margins: desired margin (borders) size
* @GTF_M: extended GTF formula parameters
* @GTF_2C: extended GTF formula parameters
* @GTF_K: extended GTF formula parameters
* @GTF_2J: extended GTF formula parameters
*
* GTF feature blocks specify C and J in multiples of 0.5, so we pass them
* in here multiplied by two. For a C of 40, pass in 80.
*
* Returns:
* The modeline based on the full GTF algorithm stored in a drm_display_mode object.
* The display mode object is allocated with drm_mode_create(). Returns NULL
* when no mode could be allocated.
*/
struct drm_display_mode *
drm_gtf_mode_complex(struct drm_device *dev, int hdisplay, int vdisplay,
@ -467,17 +532,13 @@ drm_gtf_mode_complex(struct drm_device *dev, int hdisplay, int vdisplay,
EXPORT_SYMBOL(drm_gtf_mode_complex);
/**
* drm_gtf_mode - create the modeline based on GTF algorithm
*
* @dev :drm device
* @hdisplay :hdisplay size
* @vdisplay :vdisplay size
* @vrefresh :vrefresh rate.
* @interlaced :whether the interlace is supported
* @margins :whether the margin is supported
*
* LOCKING.
* none.
* drm_gtf_mode - create the modeline based on the GTF algorithm
* @dev: drm device
* @hdisplay: hdisplay size
* @vdisplay: vdisplay size
* @vrefresh: vrefresh rate.
* @interlaced: whether to compute an interlaced mode
* @margins: desired margin (borders) size
*
* return the modeline based on GTF algorithm
*
@ -496,19 +557,32 @@ EXPORT_SYMBOL(drm_gtf_mode_complex);
* C = 40
* K = 128
* J = 20
*
* Returns:
* The modeline based on the GTF algorithm stored in a drm_display_mode object.
* The display mode object is allocated with drm_mode_create(). Returns NULL
* when no mode could be allocated.
*/
struct drm_display_mode *
drm_gtf_mode(struct drm_device *dev, int hdisplay, int vdisplay, int vrefresh,
bool lace, int margins)
bool interlaced, int margins)
{
return drm_gtf_mode_complex(dev, hdisplay, vdisplay, vrefresh, lace,
margins, 600, 40 * 2, 128, 20 * 2);
return drm_gtf_mode_complex(dev, hdisplay, vdisplay, vrefresh,
interlaced, margins,
600, 40 * 2, 128, 20 * 2);
}
EXPORT_SYMBOL(drm_gtf_mode);
#ifdef CONFIG_VIDEOMODE_HELPERS
int drm_display_mode_from_videomode(const struct videomode *vm,
struct drm_display_mode *dmode)
/**
* drm_display_mode_from_videomode - fill in @dmode using @vm
* @vm: videomode structure to use as source
* @dmode: drm_display_mode structure to use as destination
*
* Fills out @dmode using the display mode specified in @vm.
*/
void drm_display_mode_from_videomode(const struct videomode *vm,
struct drm_display_mode *dmode)
{
dmode->hdisplay = vm->hactive;
dmode->hsync_start = dmode->hdisplay + vm->hfront_porch;
@ -538,8 +612,6 @@ int drm_display_mode_from_videomode(const struct videomode *vm,
if (vm->flags & DISPLAY_FLAGS_DOUBLECLK)
dmode->flags |= DRM_MODE_FLAG_DBLCLK;
drm_mode_set_name(dmode);
return 0;
}
EXPORT_SYMBOL_GPL(drm_display_mode_from_videomode);
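/*
 * Illustrative usage sketch, assuming "vm" is a hypothetical, already
 * filled-in struct videomode (e.g. parsed from device tree timings):
 *
 *	struct drm_display_mode *mode;
 *
 *	mode = drm_mode_create(dev);
 *	if (!mode)
 *		return -ENOMEM;
 *
 *	drm_display_mode_from_videomode(&vm, mode);
 *	drm_mode_probed_add(connector, mode);
 */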
@ -553,6 +625,9 @@ EXPORT_SYMBOL_GPL(drm_display_mode_from_videomode);
* This function is expensive and should only be used, if only one mode is to be
* read from DT. To get multiple modes start with of_get_display_timings and
* work with that instead.
*
* Returns:
* 0 on success, a negative errno code when no OF videomode node was found.
*/
int of_get_drm_display_mode(struct device_node *np,
struct drm_display_mode *dmode, int index)
@ -580,10 +655,8 @@ EXPORT_SYMBOL_GPL(of_get_drm_display_mode);
* drm_mode_set_name - set the name on a mode
* @mode: name will be set in this mode
*
* LOCKING:
* None.
*
* Set the name of @mode to a standard format.
* Set the name of @mode to a standard format which is <hdisplay>x<vdisplay>
* with an optional 'i' suffix for interlaced modes.
*/
void drm_mode_set_name(struct drm_display_mode *mode)
{
@ -595,54 +668,12 @@ void drm_mode_set_name(struct drm_display_mode *mode)
}
EXPORT_SYMBOL(drm_mode_set_name);
/**
* drm_mode_width - get the width of a mode
* @mode: mode
*
* LOCKING:
* None.
*
* Return @mode's width (hdisplay) value.
*
* FIXME: is this needed?
*
* RETURNS:
* @mode->hdisplay
*/
int drm_mode_width(const struct drm_display_mode *mode)
{
return mode->hdisplay;
}
EXPORT_SYMBOL(drm_mode_width);
/**
* drm_mode_height - get the height of a mode
* @mode: mode
*
* LOCKING:
* None.
*
* Return @mode's height (vdisplay) value.
*
* FIXME: is this needed?
*
* RETURNS:
* @mode->vdisplay
*/
int drm_mode_height(const struct drm_display_mode *mode)
{
return mode->vdisplay;
}
EXPORT_SYMBOL(drm_mode_height);
/** drm_mode_hsync - get the hsync of a mode
* @mode: mode
*
* LOCKING:
* None.
*
* Return @modes's hsync rate in kHz, rounded to the nearest int.
* Returns:
* @mode's hsync rate in kHz, rounded to the nearest integer. Calculates the
* value first if it is not yet set.
*/
int drm_mode_hsync(const struct drm_display_mode *mode)
{
@ -666,17 +697,9 @@ EXPORT_SYMBOL(drm_mode_hsync);
* drm_mode_vrefresh - get the vrefresh of a mode
* @mode: mode
*
* LOCKING:
* None.
*
* Return @mode's vrefresh rate in Hz or calculate it if necessary.
*
* FIXME: why is this needed? shouldn't vrefresh be set already?
*
* RETURNS:
* Vertical refresh rate. It will be the result of actual value plus 0.5.
* If it is 70.288, it will return 70Hz.
* If it is 59.6, it will return 60Hz.
* Returns:
* @mode's vrefresh rate in Hz, rounded to the nearest integer. Calculates the
* value first if it is not yet set.
*/
int drm_mode_vrefresh(const struct drm_display_mode *mode)
{
@ -705,14 +728,11 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode)
EXPORT_SYMBOL(drm_mode_vrefresh);
/**
* drm_mode_set_crtcinfo - set CRTC modesetting parameters
* drm_mode_set_crtcinfo - set CRTC modesetting timing parameters
* @p: mode
* @adjust_flags: a combination of adjustment flags
*
* LOCKING:
* None.
*
* Setup the CRTC modesetting parameters for @p, adjusting if necessary.
* Setup the CRTC modesetting timing parameters for @p, adjusting if necessary.
*
* - The CRTC_INTERLACE_HALVE_V flag can be used to halve vertical timings of
* interlaced modes.
@ -780,15 +800,11 @@ void drm_mode_set_crtcinfo(struct drm_display_mode *p, int adjust_flags)
}
EXPORT_SYMBOL(drm_mode_set_crtcinfo);
/**
* drm_mode_copy - copy the mode
* @dst: mode to overwrite
* @src: mode to copy
*
* LOCKING:
* None.
*
* Copy an existing mode into another mode, preserving the object id and
* list head of the destination mode.
*/
@ -805,13 +821,14 @@ EXPORT_SYMBOL(drm_mode_copy);
/**
* drm_mode_duplicate - allocate and duplicate an existing mode
* @m: mode to duplicate
*
* LOCKING:
* None.
* @dev: drm_device to allocate the duplicated mode for
* @mode: mode to duplicate
*
* Just allocate a new mode, copy the existing mode into it, and return
* a pointer to it. Used to create new instances of established modes.
*
* Returns:
* Pointer to duplicated mode on success, NULL on error.
*/
struct drm_display_mode *drm_mode_duplicate(struct drm_device *dev,
const struct drm_display_mode *mode)
@ -833,12 +850,9 @@ EXPORT_SYMBOL(drm_mode_duplicate);
* @mode1: first mode
* @mode2: second mode
*
* LOCKING:
* None.
*
* Check to see if @mode1 and @mode2 are equivalent.
*
* RETURNS:
* Returns:
* True if the modes are equal, false otherwise.
*/
bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2)
@ -864,13 +878,10 @@ EXPORT_SYMBOL(drm_mode_equal);
* @mode1: first mode
* @mode2: second mode
*
* LOCKING:
* None.
*
* Check to see if @mode1 and @mode2 are equivalent, but
* don't check the pixel clocks nor the stereo layout.
*
* RETURNS:
* Returns:
* True if the modes are equal, false otherwise.
*/
bool drm_mode_equal_no_clocks_no_stereo(const struct drm_display_mode *mode1,
@ -900,25 +911,19 @@ EXPORT_SYMBOL(drm_mode_equal_no_clocks_no_stereo);
* @mode_list: list of modes to check
* @maxX: maximum width
* @maxY: maximum height
* @maxPitch: max pitch
*
* LOCKING:
* Caller must hold a lock protecting @mode_list.
*
* The DRM device (@dev) has size and pitch limits. Here we validate the
* modes we probed for @dev against those limits and set their status as
* necessary.
* This function is a helper which can be used to validate modes against size
* limitations of the DRM device/connector. If a mode is too big its status
* member is updated with the appropriate validation failure code. The list
* itself is not changed.
*/
void drm_mode_validate_size(struct drm_device *dev,
struct list_head *mode_list,
int maxX, int maxY, int maxPitch)
int maxX, int maxY)
{
struct drm_display_mode *mode;
list_for_each_entry(mode, mode_list, head) {
if (maxPitch > 0 && mode->hdisplay > maxPitch)
mode->status = MODE_BAD_WIDTH;
if (maxX > 0 && mode->hdisplay > maxX)
mode->status = MODE_VIRTUAL_X;
@ -934,12 +939,10 @@ EXPORT_SYMBOL(drm_mode_validate_size);
* @mode_list: list of modes to check
* @verbose: be verbose about it
*
* LOCKING:
* Caller must hold a lock protecting @mode_list.
*
* Once mode list generation is complete, a caller can use this routine to
* remove invalid modes from a mode list. If any of the modes have a
* status other than %MODE_OK, they are removed from @mode_list and freed.
* This helper function can be used to prune a display mode list after
* validation has been completed. All modes whose status is not MODE_OK will be
* removed from the list, and if @verbose the status code and mode name are also
* printed to dmesg.
*/
void drm_mode_prune_invalid(struct drm_device *dev,
struct list_head *mode_list, bool verbose)
@ -966,13 +969,10 @@ EXPORT_SYMBOL(drm_mode_prune_invalid);
* @lh_a: list_head for first mode
* @lh_b: list_head for second mode
*
* LOCKING:
* None.
*
* Compare two modes, given by @lh_a and @lh_b, returning a value indicating
* which is better.
*
* RETURNS:
* Returns:
* Negative if @lh_a is better than @lh_b, zero if they're equivalent, or
* positive if @lh_b is better than @lh_a.
*/
@ -1000,12 +1000,9 @@ static int drm_mode_compare(void *priv, struct list_head *lh_a, struct list_head
/**
* drm_mode_sort - sort mode list
* @mode_list: list to sort
* @mode_list: list of drm_display_mode structures to sort
*
* LOCKING:
* Caller must hold a lock protecting @mode_list.
*
* Sort @mode_list by favorability, putting good modes first.
* Sort @mode_list by favorability, moving good modes to the head of the list.
*/
void drm_mode_sort(struct list_head *mode_list)
{
@ -1017,13 +1014,12 @@ EXPORT_SYMBOL(drm_mode_sort);
* drm_mode_connector_list_update - update the mode list for the connector
* @connector: the connector to update
*
* LOCKING:
* Caller must hold a lock protecting @mode_list.
*
* This moves the modes from the @connector probed_modes list
* to the actual mode list. It compares the probed mode against the current
* list and only adds different modes. All modes unverified after this point
* will be removed by the prune invalid modes.
* list and only adds different/new modes.
*
* This is just a helper function; it doesn't validate any modes itself and also
* doesn't prune any invalid modes. Callers need to do that themselves.
*/
void drm_mode_connector_list_update(struct drm_connector *connector)
{
@ -1031,6 +1027,8 @@ void drm_mode_connector_list_update(struct drm_connector *connector)
struct drm_display_mode *pmode, *pt;
int found_it;
WARN_ON(!mutex_is_locked(&connector->dev->mode_config.mutex));
list_for_each_entry_safe(pmode, pt, &connector->probed_modes,
head) {
found_it = 0;
@ -1056,17 +1054,25 @@ void drm_mode_connector_list_update(struct drm_connector *connector)
EXPORT_SYMBOL(drm_mode_connector_list_update);
/**
* drm_mode_parse_command_line_for_connector - parse command line for connector
* @mode_option - per connector mode option
* @connector - connector to parse line for
* drm_mode_parse_command_line_for_connector - parse command line modeline for connector
* @mode_option: optional per connector mode option
* @connector: connector to parse modeline for
* @mode: preallocated drm_cmdline_mode structure to fill out
*
* This parses the connector specific then generic command lines for
* modes and options to configure the connector.
* This parses @mode_option command line modeline for modes and options to
* configure the connector. If @mode_option is NULL the default command line
* modeline in fb_mode_option will be parsed instead.
*
* This uses the same parameters as the fb modedb.c, except for an extra
* force-enable, force-enable-digital and force-disable bit at the end:
*
* This uses the same parameters as the fb modedb.c, except for extra
* <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m][eDd]
*
* enable/enable Digital/disable bit at the end
* The intermediate drm_cmdline_mode structure is required to store additional
* options from the command line modeline like the force-enable/disable flag.
*
* Returns:
* True if a valid modeline has been parsed, false otherwise.
*/
bool drm_mode_parse_command_line_for_connector(const char *mode_option,
struct drm_connector *connector,
@ -1219,6 +1225,14 @@ done:
}
EXPORT_SYMBOL(drm_mode_parse_command_line_for_connector);
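/*
 * Illustrative usage sketch, assuming a hypothetical per-connector option
 * string "option" (e.g. taken from video= on the kernel command line):
 *
 *	struct drm_cmdline_mode cmdline_mode = { };
 *	struct drm_display_mode *mode;
 *
 *	if (drm_mode_parse_command_line_for_connector(option, connector,
 *						       &cmdline_mode)) {
 *		mode = drm_mode_create_from_cmdline_mode(connector->dev,
 *							 &cmdline_mode);
 *		if (mode)
 *			drm_mode_probed_add(connector, mode);
 *	}
 */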
/**
* drm_mode_create_from_cmdline_mode - convert a command line modeline into a DRM display mode
* @dev: DRM device to create the new mode for
* @cmd: input command line modeline
*
* Returns:
* Pointer to converted mode on success, NULL on error.
*/
struct drm_display_mode *
drm_mode_create_from_cmdline_mode(struct drm_device *dev,
struct drm_cmdline_mode *cmd)

View File

@ -68,7 +68,8 @@ struct drm_prime_attachment {
enum dma_data_direction dir;
};
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle)
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv,
struct dma_buf *dma_buf, uint32_t handle)
{
struct drm_prime_member *member;
@ -174,7 +175,7 @@ void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpr
}
static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
enum dma_data_direction dir)
enum dma_data_direction dir)
{
struct drm_prime_attachment *prime_attach = attach->priv;
struct drm_gem_object *obj = attach->dmabuf->priv;
@ -211,11 +212,19 @@ static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
}
static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
struct sg_table *sgt, enum dma_data_direction dir)
struct sg_table *sgt,
enum dma_data_direction dir)
{
/* nothing to be done here */
}
/**
* drm_gem_dmabuf_release - dma_buf release implementation for GEM
* @dma_buf: buffer to be released
*
* Generic release function for dma_bufs exported as PRIME buffers. GEM drivers
* must use this in their dma_buf ops structure as the release callback.
*/
void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
{
struct drm_gem_object *obj = dma_buf->priv;
@ -242,30 +251,30 @@ static void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
}
static void *drm_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf,
unsigned long page_num)
unsigned long page_num)
{
return NULL;
}
static void drm_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf,
unsigned long page_num, void *addr)
unsigned long page_num, void *addr)
{
}
static void *drm_gem_dmabuf_kmap(struct dma_buf *dma_buf,
unsigned long page_num)
unsigned long page_num)
{
return NULL;
}
static void drm_gem_dmabuf_kunmap(struct dma_buf *dma_buf,
unsigned long page_num, void *addr)
unsigned long page_num, void *addr)
{
}
static int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf,
struct vm_area_struct *vma)
struct vm_area_struct *vma)
{
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
@ -315,6 +324,15 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
* driver's scatter/gather table
*/
/**
* drm_gem_prime_export - helper library implementation of the export callback
* @dev: drm_device to export from
* @obj: GEM object to export
* @flags: flags like DRM_CLOEXEC
*
* This is the implementation of the gem_prime_export function for GEM drivers
* using the PRIME helpers.
*/
struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
struct drm_gem_object *obj, int flags)
{
@ -355,9 +373,23 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
return dmabuf;
}
/**
* drm_gem_prime_handle_to_fd - PRIME export function for GEM drivers
* @dev: dev to export the buffer from
* @file_priv: drm file-private structure
* @handle: buffer handle to export
* @flags: flags like DRM_CLOEXEC
* @prime_fd: pointer to storage for the fd id of the created dma-buf
*
* This is the PRIME export function which GEM drivers must use to ensure
* correct lifetime management of the underlying GEM object.
* The actual exporting from GEM object to a dma-buf is done through the
* gem_prime_export driver callback.
*/
int drm_gem_prime_handle_to_fd(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle, uint32_t flags,
int *prime_fd)
struct drm_file *file_priv, uint32_t handle,
uint32_t flags,
int *prime_fd)
{
struct drm_gem_object *obj;
int ret = 0;
@ -441,6 +473,14 @@ out_unlock:
}
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
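/*
 * Illustrative sketch of how a GEM driver typically wires up the PRIME
 * helpers in its struct drm_driver ("foo_" names are hypothetical):
 *
 *	static struct drm_driver foo_driver = {
 *		.driver_features	= DRIVER_GEM | DRIVER_PRIME,
 *		.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
 *		.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
 *		.gem_prime_export	= drm_gem_prime_export,
 *		.gem_prime_import	= drm_gem_prime_import,
 *		...
 *	};
 */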
/**
* drm_gem_prime_import - helper library implementation of the import callback
* @dev: drm_device to import into
* @dma_buf: dma-buf object to import
*
* This is the implementation of the gem_prime_import function for GEM drivers
* using the PRIME helpers.
*/
struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf)
{
@ -496,8 +536,21 @@ fail_detach:
}
EXPORT_SYMBOL(drm_gem_prime_import);
/**
* drm_gem_prime_fd_to_handle - PRIME import function for GEM drivers
* @dev: dev to import the buffer into
* @file_priv: drm file-private structure
* @prime_fd: fd id of the dma-buf which should be imported
* @handle: pointer to storage for the handle of the imported buffer object
*
* This is the PRIME import function which GEM drivers must use to ensure
* correct lifetime management of the underlying GEM object.
* The actual importing of the GEM object from the dma-buf is done through the
* gem_prime_import driver callback.
*/
int drm_gem_prime_fd_to_handle(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd, uint32_t *handle)
struct drm_file *file_priv, int prime_fd,
uint32_t *handle)
{
struct dma_buf *dma_buf;
struct drm_gem_object *obj;
@ -598,12 +651,14 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
args->fd, &args->handle);
}
/*
* drm_prime_pages_to_sg
/**
* drm_prime_pages_to_sg - converts a page array into an sg list
* @pages: pointer to the array of page pointers to convert
* @nr_pages: length of the page vector
*
* this helper creates an sg table object from a set of pages
* This helper creates an sg table object from a set of pages;
* the driver is responsible for mapping the pages into the
* importers address space
* importer's address space for use with dma_buf itself.
*/
struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages)
{
@ -628,9 +683,16 @@ out:
}
EXPORT_SYMBOL(drm_prime_pages_to_sg);
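/*
 * Illustrative usage sketch, assuming a hypothetical GEM object type which
 * keeps its backing pages in an array:
 *
 *	static struct sg_table *foo_gem_prime_get_sg_table(struct drm_gem_object *gobj)
 *	{
 *		struct foo_gem_object *obj = to_foo_gem_object(gobj);
 *
 *		return drm_prime_pages_to_sg(obj->pages, obj->num_pages);
 *	}
 */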
/* export an sg table into an array of pages and addresses
this is currently required by the TTM driver in order to do correct fault
handling */
/**
* drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
* @sgt: scatter-gather table to convert
* @pages: array of page pointers to store the page array in
* @addrs: optional array to store the dma bus address of each page
* @max_pages: size of both the passed-in arrays
*
* Exports an sg table into an array of pages and addresses. This is currently
* required by the TTM driver in order to do correct fault handling.
*/
int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
dma_addr_t *addrs, int max_pages)
{
@ -663,7 +725,15 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
return 0;
}
EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
/* helper function to cleanup a GEM/prime object */
/**
* drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
* @obj: GEM object which was created from a dma-buf
* @sg: the sg-table which was pinned at import time
*
* This is the cleanup function which GEM drivers need to call when they use
* @drm_gem_prime_import to import dma-bufs.
*/
void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg)
{
struct dma_buf_attachment *attach;
@ -683,11 +753,9 @@ void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv)
INIT_LIST_HEAD(&prime_fpriv->head);
mutex_init(&prime_fpriv->lock);
}
EXPORT_SYMBOL(drm_prime_init_file_private);
void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
{
/* by now drm_gem_release should've made sure the list is empty */
WARN_ON(!list_empty(&prime_fpriv->head));
}
EXPORT_SYMBOL(drm_prime_destroy_file_private);

View File

@ -1883,7 +1883,6 @@ static int imx_hdmi_platform_remove(struct platform_device *pdev)
struct drm_connector *connector = &hdmi->connector;
struct drm_encoder *encoder = &hdmi->encoder;
drm_mode_connector_detach_encoder(connector, encoder);
imx_drm_remove_connector(hdmi->imx_drm_connector);
imx_drm_remove_encoder(hdmi->imx_drm_encoder);

View File

@ -595,8 +595,6 @@ static int imx_ldb_remove(struct platform_device *pdev)
struct drm_connector *connector = &channel->connector;
struct drm_encoder *encoder = &channel->encoder;
drm_mode_connector_detach_encoder(connector, encoder);
imx_drm_remove_connector(channel->imx_drm_connector);
imx_drm_remove_encoder(channel->imx_drm_encoder);
}

View File

@ -709,8 +709,6 @@ static int imx_tve_remove(struct platform_device *pdev)
struct drm_connector *connector = &tve->connector;
struct drm_encoder *encoder = &tve->encoder;
drm_mode_connector_detach_encoder(connector, encoder);
imx_drm_remove_connector(tve->imx_drm_connector);
imx_drm_remove_encoder(tve->imx_drm_encoder);

View File

@ -244,8 +244,6 @@ static int imx_pd_remove(struct platform_device *pdev)
struct drm_connector *connector = &imxpd->connector;
struct drm_encoder *encoder = &imxpd->encoder;
drm_mode_connector_detach_encoder(connector, encoder);
imx_drm_remove_connector(imxpd->imx_drm_connector);
imx_drm_remove_encoder(imxpd->imx_drm_encoder);

View File

@ -1056,21 +1056,6 @@ struct drm_minor {
struct drm_mode_group mode_group;
};
/* mode specified on the command line */
struct drm_cmdline_mode {
bool specified;
bool refresh_specified;
bool bpp_specified;
int xres, yres;
int bpp;
int refresh;
bool rb;
bool interlace;
bool cvt;
bool margins;
enum drm_connector_force force;
};
struct drm_pending_vblank_event {
struct drm_pending_event base;
@ -1417,20 +1402,6 @@ extern int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
extern void drm_calc_timestamping_constants(struct drm_crtc *crtc,
const struct drm_display_mode *mode);
extern bool
drm_mode_parse_command_line_for_connector(const char *mode_option,
struct drm_connector *connector,
struct drm_cmdline_mode *mode);
extern struct drm_display_mode *
drm_mode_create_from_cmdline_mode(struct drm_device *dev,
struct drm_cmdline_mode *cmd);
extern int drm_display_mode_from_videomode(const struct videomode *vm,
struct drm_display_mode *dmode);
extern int of_get_drm_display_mode(struct device_node *np,
struct drm_display_mode *dmode,
int index);
/* Modesetting support */
extern void drm_vblank_pre_modeset(struct drm_device *dev, int crtc);

View File

@ -32,7 +32,6 @@
#include <linux/fb.h>
#include <linux/hdmi.h>
#include <drm/drm_mode.h>
#include <drm/drm_fourcc.h>
struct drm_device;
@ -65,130 +64,14 @@ struct drm_object_properties {
uint64_t values[DRM_OBJECT_MAX_PROPERTY];
};
/*
* Note on terminology: here, for brevity and convenience, we refer to connector
* control chips as 'CRTCs'. They can control any type of connector, VGA, LVDS,
* DVI, etc. And 'screen' refers to the whole of the visible display, which
* may span multiple monitors (and therefore multiple CRTC and connector
* structures).
*/
enum drm_mode_status {
MODE_OK = 0, /* Mode OK */
MODE_HSYNC, /* hsync out of range */
MODE_VSYNC, /* vsync out of range */
MODE_H_ILLEGAL, /* mode has illegal horizontal timings */
MODE_V_ILLEGAL, /* mode has illegal horizontal timings */
MODE_BAD_WIDTH, /* requires an unsupported linepitch */
MODE_NOMODE, /* no mode with a matching name */
MODE_NO_INTERLACE, /* interlaced mode not supported */
MODE_NO_DBLESCAN, /* doublescan mode not supported */
MODE_NO_VSCAN, /* multiscan mode not supported */
MODE_MEM, /* insufficient video memory */
MODE_VIRTUAL_X, /* mode width too large for specified virtual size */
MODE_VIRTUAL_Y, /* mode height too large for specified virtual size */
MODE_MEM_VIRT, /* insufficient video memory given virtual size */
MODE_NOCLOCK, /* no fixed clock available */
MODE_CLOCK_HIGH, /* clock required is too high */
MODE_CLOCK_LOW, /* clock required is too low */
MODE_CLOCK_RANGE, /* clock/mode isn't in a ClockRange */
MODE_BAD_HVALUE, /* horizontal timing was out of range */
MODE_BAD_VVALUE, /* vertical timing was out of range */
MODE_BAD_VSCAN, /* VScan value out of range */
MODE_HSYNC_NARROW, /* horizontal sync too narrow */
MODE_HSYNC_WIDE, /* horizontal sync too wide */
MODE_HBLANK_NARROW, /* horizontal blanking too narrow */
MODE_HBLANK_WIDE, /* horizontal blanking too wide */
MODE_VSYNC_NARROW, /* vertical sync too narrow */
MODE_VSYNC_WIDE, /* vertical sync too wide */
MODE_VBLANK_NARROW, /* vertical blanking too narrow */
MODE_VBLANK_WIDE, /* vertical blanking too wide */
MODE_PANEL, /* exceeds panel dimensions */
MODE_INTERLACE_WIDTH, /* width too large for interlaced mode */
MODE_ONE_WIDTH, /* only one width is supported */
MODE_ONE_HEIGHT, /* only one height is supported */
MODE_ONE_SIZE, /* only one resolution is supported */
MODE_NO_REDUCED, /* monitor doesn't accept reduced blanking */
MODE_NO_STEREO, /* stereo modes not supported */
MODE_UNVERIFIED = -3, /* mode needs to reverified */
MODE_BAD = -2, /* unspecified reason */
MODE_ERROR = -1 /* error condition */
enum drm_connector_force {
DRM_FORCE_UNSPECIFIED,
DRM_FORCE_OFF,
DRM_FORCE_ON, /* force on analog part normally */
DRM_FORCE_ON_DIGITAL, /* for DVI-I use digital connector */
};
#define DRM_MODE_TYPE_CLOCK_CRTC_C (DRM_MODE_TYPE_CLOCK_C | \
DRM_MODE_TYPE_CRTC_C)
#define DRM_MODE(nm, t, c, hd, hss, hse, ht, hsk, vd, vss, vse, vt, vs, f) \
.name = nm, .status = 0, .type = (t), .clock = (c), \
.hdisplay = (hd), .hsync_start = (hss), .hsync_end = (hse), \
.htotal = (ht), .hskew = (hsk), .vdisplay = (vd), \
.vsync_start = (vss), .vsync_end = (vse), .vtotal = (vt), \
.vscan = (vs), .flags = (f), \
.base.type = DRM_MODE_OBJECT_MODE
#define CRTC_INTERLACE_HALVE_V (1 << 0) /* halve V values for interlacing */
#define CRTC_STEREO_DOUBLE (1 << 1) /* adjust timings for stereo modes */
#define DRM_MODE_FLAG_3D_MAX DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF
struct drm_display_mode {
/* Header */
struct list_head head;
struct drm_mode_object base;
char name[DRM_DISPLAY_MODE_LEN];
enum drm_mode_status status;
unsigned int type;
/* Proposed mode values */
int clock; /* in kHz */
int hdisplay;
int hsync_start;
int hsync_end;
int htotal;
int hskew;
int vdisplay;
int vsync_start;
int vsync_end;
int vtotal;
int vscan;
unsigned int flags;
/* Addressable image size (may be 0 for projectors, etc.) */
int width_mm;
int height_mm;
/* Actual mode we give to hw */
int crtc_clock; /* in KHz */
int crtc_hdisplay;
int crtc_hblank_start;
int crtc_hblank_end;
int crtc_hsync_start;
int crtc_hsync_end;
int crtc_htotal;
int crtc_hskew;
int crtc_vdisplay;
int crtc_vblank_start;
int crtc_vblank_end;
int crtc_vsync_start;
int crtc_vsync_end;
int crtc_vtotal;
/* Driver private mode info */
int private_size;
int *private;
int private_flags;
int vrefresh; /* in Hz */
int hsync; /* in kHz */
enum hdmi_picture_aspect picture_aspect_ratio;
};
static inline bool drm_mode_is_stereo(const struct drm_display_mode *mode)
{
return mode->flags & DRM_MODE_FLAG_3D_MASK;
}
#include <drm/drm_modes.h>
enum drm_connector_status {
connector_status_connected = 1,
@ -540,13 +423,6 @@ struct drm_encoder {
void *helper_private;
};
enum drm_connector_force {
DRM_FORCE_UNSPECIFIED,
DRM_FORCE_OFF,
DRM_FORCE_ON, /* force on analog part normally */
DRM_FORCE_ON_DIGITAL, /* for DVI-I use digital connector */
};
/* should we poll this connector for connects and disconnects */
/* hot plug detectable */
#define DRM_CONNECTOR_POLL_HPD (1 << 0)
@ -1007,34 +883,10 @@ extern struct edid *drm_get_edid(struct drm_connector *connector,
struct i2c_adapter *adapter);
extern struct edid *drm_edid_duplicate(const struct edid *edid);
extern int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid);
extern void drm_mode_probed_add(struct drm_connector *connector, struct drm_display_mode *mode);
extern void drm_mode_copy(struct drm_display_mode *dst, const struct drm_display_mode *src);
extern struct drm_display_mode *drm_mode_duplicate(struct drm_device *dev,
const struct drm_display_mode *mode);
extern void drm_mode_debug_printmodeline(const struct drm_display_mode *mode);
extern void drm_mode_config_init(struct drm_device *dev);
extern void drm_mode_config_reset(struct drm_device *dev);
extern void drm_mode_config_cleanup(struct drm_device *dev);
extern void drm_mode_set_name(struct drm_display_mode *mode);
extern bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2);
extern bool drm_mode_equal_no_clocks_no_stereo(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2);
extern int drm_mode_width(const struct drm_display_mode *mode);
extern int drm_mode_height(const struct drm_display_mode *mode);
/* for us by fb module */
extern struct drm_display_mode *drm_mode_create(struct drm_device *dev);
extern void drm_mode_destroy(struct drm_device *dev, struct drm_display_mode *mode);
extern void drm_mode_validate_size(struct drm_device *dev,
struct list_head *mode_list,
int maxX, int maxY, int maxPitch);
extern void drm_mode_prune_invalid(struct drm_device *dev,
struct list_head *mode_list, bool verbose);
extern void drm_mode_sort(struct list_head *mode_list);
extern int drm_mode_hsync(const struct drm_display_mode *mode);
extern int drm_mode_vrefresh(const struct drm_display_mode *mode);
extern void drm_mode_set_crtcinfo(struct drm_display_mode *p,
int adjust_flags);
extern void drm_mode_connector_list_update(struct drm_connector *connector);
extern int drm_mode_connector_update_edid_property(struct drm_connector *connector,
struct edid *edid);
extern int drm_object_property_set_value(struct drm_mode_object *obj,
@ -1082,8 +934,6 @@ extern const char *drm_get_encoder_name(const struct drm_encoder *encoder);
extern int drm_mode_connector_attach_encoder(struct drm_connector *connector,
struct drm_encoder *encoder);
extern void drm_mode_connector_detach_encoder(struct drm_connector *connector,
struct drm_encoder *encoder);
extern int drm_mode_crtc_set_gamma_size(struct drm_crtc *crtc,
int gamma_size);
extern struct drm_mode_object *drm_mode_object_find(struct drm_device *dev,
@ -1138,16 +988,6 @@ extern bool drm_detect_monitor_audio(struct edid *edid);
extern bool drm_rgb_quant_range_selectable(struct edid *edid);
extern int drm_mode_page_flip_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv);
extern struct drm_display_mode *drm_cvt_mode(struct drm_device *dev,
int hdisplay, int vdisplay, int vrefresh,
bool reduced, bool interlaced, bool margins);
extern struct drm_display_mode *drm_gtf_mode(struct drm_device *dev,
int hdisplay, int vdisplay, int vrefresh,
bool interlaced, int margins);
extern struct drm_display_mode *drm_gtf_mode_complex(struct drm_device *dev,
int hdisplay, int vdisplay, int vrefresh,
bool interlaced, int margins, int GTF_M,
int GTF_2C, int GTF_K, int GTF_2J);
extern int drm_add_modes_noedid(struct drm_connector *connector,
int hdisplay, int vdisplay);
extern void drm_set_preferred_mode(struct drm_connector *connector,

View File

@ -139,8 +139,8 @@ extern void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
extern void drm_helper_move_panel_connectors_to_head(struct drm_device *);
extern int drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
struct drm_mode_fb_cmd2 *mode_cmd);
extern void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
struct drm_mode_fb_cmd2 *mode_cmd);
static inline void drm_crtc_helper_add(struct drm_crtc *crtc,
const struct drm_crtc_helper_funcs *funcs)
@ -160,7 +160,7 @@ static inline void drm_connector_helper_add(struct drm_connector *connector,
connector->helper_private = (void *)funcs;
}
extern int drm_helper_resume_force_mode(struct drm_device *dev);
extern void drm_helper_resume_force_mode(struct drm_device *dev);
extern void drm_kms_helper_poll_init(struct drm_device *dev);
extern void drm_kms_helper_poll_fini(struct drm_device *dev);
extern bool drm_helper_hpd_irq_event(struct drm_device *dev);

View File

@ -85,11 +85,31 @@ struct drm_mm {
unsigned long *start, unsigned long *end);
};
/**
* drm_mm_node_allocated - checks whether a node is allocated
* @node: drm_mm_node to check
*
* Drivers should use this helper for proper encapsulation of drm_mm
* internals.
*
* Returns:
* True if the @node is allocated.
*/
static inline bool drm_mm_node_allocated(struct drm_mm_node *node)
{
return node->allocated;
}
/**
* drm_mm_initialized - checks whether an allocator is initialized
* @mm: drm_mm to check
*
* Drivers should use this helper for proper encapsulation of drm_mm
* internals.
*
* Returns:
* True if the @mm is initialized.
*/
static inline bool drm_mm_initialized(struct drm_mm *mm)
{
return mm->hole_stack.next;
@ -100,6 +120,17 @@ static inline unsigned long __drm_mm_hole_node_start(struct drm_mm_node *hole_no
return hole_node->start + hole_node->size;
}
/**
* drm_mm_hole_node_start - computes the start of the hole following @node
* @hole_node: drm_mm_node which implicitly tracks the following hole
*
* This is useful for driver-specific debug dumpers. Otherwise drivers should not
* inspect holes themselves. Drivers must check first whether a hole indeed
* follows by looking at node->hole_follows.
*
* Returns:
* Start of the subsequent hole.
*/
static inline unsigned long drm_mm_hole_node_start(struct drm_mm_node *hole_node)
{
BUG_ON(!hole_node->hole_follows);
@ -112,18 +143,49 @@ static inline unsigned long __drm_mm_hole_node_end(struct drm_mm_node *hole_node
struct drm_mm_node, node_list)->start;
}
/**
* drm_mm_hole_node_end - computes the end of the hole following @node
* @hole_node: drm_mm_node which implicitly tracks the following hole
*
* This is useful for driver-specific debug dumpers. Otherwise drivers should not
* inspect holes themselves. Drivers must check first whether a hole indeed
* follows by looking at node->hole_follows.
*
* Returns:
* End of the subsequent hole.
*/
static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node)
{
return __drm_mm_hole_node_end(hole_node);
}
/**
* drm_mm_for_each_node - iterator to walk over all allocated nodes
* @entry: drm_mm_node structure to assign to in each iteration step
* @mm: drm_mm allocator to walk
*
* This iterator walks over all nodes in the range allocator. It is implemented
* with list_for_each, so it is not safe against removal of elements.
*/
#define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \
&(mm)->head_node.node_list, \
node_list)
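/*
 * Illustrative usage sketch for a driver-specific debug dump, assuming "m" is
 * a seq_file and "mm" a driver-owned allocator:
 *
 *	struct drm_mm_node *entry;
 *
 *	drm_mm_for_each_node(entry, mm)
 *		seq_printf(m, "node: %08lx + %lx\n",
 *			   entry->start, entry->size);
 */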
/* Note that we need to unroll list_for_each_entry in order to inline
* setting hole_start and hole_end on each iteration and keep the
* macro sane.
/**
* drm_mm_for_each_hole - iterator to walk over all holes
* @entry: drm_mm_node used internally to track progress
* @mm: drm_mm allocator to walk
* @hole_start: ulong variable to assign the hole start to on each iteration
* @hole_end: ulong variable to assign the hole end to on each iteration
*
* This iterator walks over all holes in the range allocator. It is implemented
* with list_for_each, so it is not safe against removal of elements. @entry is used
* internally and will not reflect a real drm_mm_node for the very first hole.
* Hence users of this iterator may not access it.
*
* Implementation Note:
* We need to inline list_for_each_entry in order to be able to set hole_start
* and hole_end on each iteration while keeping the macro sane.
*/
#define drm_mm_for_each_hole(entry, mm, hole_start, hole_end) \
for (entry = list_entry((mm)->hole_stack.next, struct drm_mm_node, hole_stack); \
@ -136,14 +198,30 @@ static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node)
/*
* Basic range manager support (drm_mm.c)
*/
extern int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node);
int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node);
extern int drm_mm_insert_node_generic(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
unsigned alignment,
unsigned long color,
enum drm_mm_search_flags flags);
int drm_mm_insert_node_generic(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
unsigned alignment,
unsigned long color,
enum drm_mm_search_flags flags);
/**
* drm_mm_insert_node - search for space and insert @node
* @mm: drm_mm to allocate from
* @node: preallocated node to insert
* @size: size of the allocation
* @alignment: alignment of the allocation
* @flags: flags to fine-tune the allocation
*
* This is a simplified version of drm_mm_insert_node_generic() with @color set
* to 0.
*
* The preallocated node must be cleared to 0.
*
* Returns:
* 0 on success, -ENOSPC if there's no suitable hole.
*/
static inline int drm_mm_insert_node(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
@ -153,14 +231,32 @@ static inline int drm_mm_insert_node(struct drm_mm *mm,
return drm_mm_insert_node_generic(mm, node, size, alignment, 0, flags);
}
extern int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
unsigned alignment,
unsigned long color,
unsigned long start,
unsigned long end,
enum drm_mm_search_flags flags);
int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
unsigned alignment,
unsigned long color,
unsigned long start,
unsigned long end,
enum drm_mm_search_flags flags);
/**
* drm_mm_insert_node_in_range - ranged search for space and insert @node
* @mm: drm_mm to allocate from
* @node: preallocated node to insert
* @size: size of the allocation
* @alignment: alignment of the allocation
* @start: start of the allowed range for this node
* @end: end of the allowed range for this node
* @flags: flags to fine-tune the allocation
*
* This is a simplified version of drm_mm_insert_node_in_range_generic() with
* @color set to 0.
*
* The preallocated node must be cleared to 0.
*
* Returns:
* 0 on success, -ENOSPC if there's no suitable hole.
*/
static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
struct drm_mm_node *node,
unsigned long size,
@ -173,13 +269,13 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
0, start, end, flags);
}
extern void drm_mm_remove_node(struct drm_mm_node *node);
extern void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new);
extern void drm_mm_init(struct drm_mm *mm,
unsigned long start,
unsigned long size);
extern void drm_mm_takedown(struct drm_mm *mm);
extern int drm_mm_clean(struct drm_mm *mm);
void drm_mm_remove_node(struct drm_mm_node *node);
void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new);
void drm_mm_init(struct drm_mm *mm,
unsigned long start,
unsigned long size);
void drm_mm_takedown(struct drm_mm *mm);
bool drm_mm_clean(struct drm_mm *mm);
void drm_mm_init_scan(struct drm_mm *mm,
unsigned long size,
@ -191,10 +287,10 @@ void drm_mm_init_scan_with_range(struct drm_mm *mm,
unsigned long color,
unsigned long start,
unsigned long end);
int drm_mm_scan_add_block(struct drm_mm_node *node);
int drm_mm_scan_remove_block(struct drm_mm_node *node);
bool drm_mm_scan_add_block(struct drm_mm_node *node);
bool drm_mm_scan_remove_block(struct drm_mm_node *node);
extern void drm_mm_debug_table(struct drm_mm *mm, const char *prefix);
void drm_mm_debug_table(struct drm_mm *mm, const char *prefix);
#ifdef CONFIG_DEBUG_FS
int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm);
#endif

237
include/drm/drm_modes.h Normal file
View File

@ -0,0 +1,237 @@
/*
* Copyright © 2006 Keith Packard
* Copyright © 2007-2008 Dave Airlie
* Copyright © 2007-2008 Intel Corporation
* Jesse Barnes <jesse.barnes@intel.com>
* Copyright © 2014 Intel Corporation
* Daniel Vetter <daniel.vetter@ffwll.ch>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef __DRM_MODES_H__
#define __DRM_MODES_H__
/*
* Note on terminology: here, for brevity and convenience, we refer to connector
* control chips as 'CRTCs'. They can control any type of connector, VGA, LVDS,
* DVI, etc. And 'screen' refers to the whole of the visible display, which
* may span multiple monitors (and therefore multiple CRTC and connector
* structures).
*/
enum drm_mode_status {
MODE_OK = 0, /* Mode OK */
MODE_HSYNC, /* hsync out of range */
MODE_VSYNC, /* vsync out of range */
MODE_H_ILLEGAL, /* mode has illegal horizontal timings */
MODE_V_ILLEGAL, /* mode has illegal vertical timings */
MODE_BAD_WIDTH, /* requires an unsupported linepitch */
MODE_NOMODE, /* no mode with a matching name */
MODE_NO_INTERLACE, /* interlaced mode not supported */
MODE_NO_DBLESCAN, /* doublescan mode not supported */
MODE_NO_VSCAN, /* multiscan mode not supported */
MODE_MEM, /* insufficient video memory */
MODE_VIRTUAL_X, /* mode width too large for specified virtual size */
MODE_VIRTUAL_Y, /* mode height too large for specified virtual size */
MODE_MEM_VIRT, /* insufficient video memory given virtual size */
MODE_NOCLOCK, /* no fixed clock available */
MODE_CLOCK_HIGH, /* clock required is too high */
MODE_CLOCK_LOW, /* clock required is too low */
MODE_CLOCK_RANGE, /* clock/mode isn't in a ClockRange */
MODE_BAD_HVALUE, /* horizontal timing was out of range */
MODE_BAD_VVALUE, /* vertical timing was out of range */
MODE_BAD_VSCAN, /* VScan value out of range */
MODE_HSYNC_NARROW, /* horizontal sync too narrow */
MODE_HSYNC_WIDE, /* horizontal sync too wide */
MODE_HBLANK_NARROW, /* horizontal blanking too narrow */
MODE_HBLANK_WIDE, /* horizontal blanking too wide */
MODE_VSYNC_NARROW, /* vertical sync too narrow */
MODE_VSYNC_WIDE, /* vertical sync too wide */
MODE_VBLANK_NARROW, /* vertical blanking too narrow */
MODE_VBLANK_WIDE, /* vertical blanking too wide */
MODE_PANEL, /* exceeds panel dimensions */
MODE_INTERLACE_WIDTH, /* width too large for interlaced mode */
MODE_ONE_WIDTH, /* only one width is supported */
MODE_ONE_HEIGHT, /* only one height is supported */
MODE_ONE_SIZE, /* only one resolution is supported */
MODE_NO_REDUCED, /* monitor doesn't accept reduced blanking */
MODE_NO_STEREO, /* stereo modes not supported */
MODE_UNVERIFIED = -3, /* mode needs to be reverified */
MODE_BAD = -2, /* unspecified reason */
MODE_ERROR = -1 /* error condition */
};
#define DRM_MODE_TYPE_CLOCK_CRTC_C (DRM_MODE_TYPE_CLOCK_C | \
DRM_MODE_TYPE_CRTC_C)
#define DRM_MODE(nm, t, c, hd, hss, hse, ht, hsk, vd, vss, vse, vt, vs, f) \
.name = nm, .status = 0, .type = (t), .clock = (c), \
.hdisplay = (hd), .hsync_start = (hss), .hsync_end = (hse), \
.htotal = (ht), .hskew = (hsk), .vdisplay = (vd), \
.vsync_start = (vss), .vsync_end = (vse), .vtotal = (vt), \
.vscan = (vs), .flags = (f), \
.base.type = DRM_MODE_OBJECT_MODE
#define CRTC_INTERLACE_HALVE_V (1 << 0) /* halve V values for interlacing */
#define CRTC_STEREO_DOUBLE (1 << 1) /* adjust timings for stereo modes */
#define DRM_MODE_FLAG_3D_MAX DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF
struct drm_display_mode {
/* Header */
struct list_head head;
struct drm_mode_object base;
char name[DRM_DISPLAY_MODE_LEN];
enum drm_mode_status status;
unsigned int type;
/* Proposed mode values */
int clock; /* in kHz */
int hdisplay;
int hsync_start;
int hsync_end;
int htotal;
int hskew;
int vdisplay;
int vsync_start;
int vsync_end;
int vtotal;
int vscan;
unsigned int flags;
/* Addressable image size (may be 0 for projectors, etc.) */
int width_mm;
int height_mm;
/* Actual mode we give to hw */
int crtc_clock; /* in KHz */
int crtc_hdisplay;
int crtc_hblank_start;
int crtc_hblank_end;
int crtc_hsync_start;
int crtc_hsync_end;
int crtc_htotal;
int crtc_hskew;
int crtc_vdisplay;
int crtc_vblank_start;
int crtc_vblank_end;
int crtc_vsync_start;
int crtc_vsync_end;
int crtc_vtotal;
/* Driver private mode info */
int *private;
int private_flags;
int vrefresh; /* in Hz */
int hsync; /* in kHz */
enum hdmi_picture_aspect picture_aspect_ratio;
};
/* mode specified on the command line */
struct drm_cmdline_mode {
bool specified;
bool refresh_specified;
bool bpp_specified;
int xres, yres;
int bpp;
int refresh;
bool rb;
bool interlace;
bool cvt;
bool margins;
enum drm_connector_force force;
};
/**
* drm_mode_is_stereo - check for stereo mode flags
* @mode: drm_display_mode to check
*
* Returns:
* True if the mode is one of the stereo modes (like side-by-side), false if
* not.
*/
static inline bool drm_mode_is_stereo(const struct drm_display_mode *mode)
{
return mode->flags & DRM_MODE_FLAG_3D_MASK;
}
struct drm_connector;
struct drm_cmdline_mode;
struct drm_display_mode *drm_mode_create(struct drm_device *dev);
void drm_mode_destroy(struct drm_device *dev, struct drm_display_mode *mode);
void drm_mode_probed_add(struct drm_connector *connector, struct drm_display_mode *mode);
void drm_mode_debug_printmodeline(const struct drm_display_mode *mode);
struct drm_display_mode *drm_cvt_mode(struct drm_device *dev,
int hdisplay, int vdisplay, int vrefresh,
bool reduced, bool interlaced,
bool margins);
struct drm_display_mode *drm_gtf_mode(struct drm_device *dev,
int hdisplay, int vdisplay, int vrefresh,
bool interlaced, int margins);
struct drm_display_mode *drm_gtf_mode_complex(struct drm_device *dev,
int hdisplay, int vdisplay,
int vrefresh, bool interlaced,
int margins,
int GTF_M, int GTF_2C,
int GTF_K, int GTF_2J);
void drm_display_mode_from_videomode(const struct videomode *vm,
struct drm_display_mode *dmode);
int of_get_drm_display_mode(struct device_node *np,
struct drm_display_mode *dmode,
int index);
void drm_mode_set_name(struct drm_display_mode *mode);
int drm_mode_hsync(const struct drm_display_mode *mode);
int drm_mode_vrefresh(const struct drm_display_mode *mode);
void drm_mode_set_crtcinfo(struct drm_display_mode *p,
int adjust_flags);
void drm_mode_copy(struct drm_display_mode *dst,
const struct drm_display_mode *src);
struct drm_display_mode *drm_mode_duplicate(struct drm_device *dev,
const struct drm_display_mode *mode);
bool drm_mode_equal(const struct drm_display_mode *mode1,
const struct drm_display_mode *mode2);
bool drm_mode_equal_no_clocks_no_stereo(const struct drm_display_mode *mode1,
const struct drm_display_mode *mode2);
/* for use by the crtc helper probe functions */
void drm_mode_validate_size(struct drm_device *dev,
struct list_head *mode_list,
int maxX, int maxY);
void drm_mode_prune_invalid(struct drm_device *dev,
struct list_head *mode_list, bool verbose);
void drm_mode_sort(struct list_head *mode_list);
void drm_mode_connector_list_update(struct drm_connector *connector);
/* parsing cmdline modes */
bool
drm_mode_parse_command_line_for_connector(const char *mode_option,
struct drm_connector *connector,
struct drm_cmdline_mode *mode);
struct drm_display_mode *
drm_mode_create_from_cmdline_mode(struct drm_device *dev,
struct drm_cmdline_mode *cmd);
#endif /* __DRM_MODES_H__ */

View File

@ -262,6 +262,18 @@ union hdmi_vendor_any_infoframe {
struct hdmi_vendor_infoframe hdmi;
};
/**
* union hdmi_infoframe - overall union of all abstract infoframe representations
* @any: generic infoframe
* @avi: avi infoframe
* @spd: spd infoframe
* @vendor: union of all vendor infoframes
* @audio: audio infoframe
*
* This is used by the generic pack function. This works since all infoframes
* have the same header which also indicates which type of infoframe should be
* packed.
*/
union hdmi_infoframe {
struct hdmi_any_infoframe any;
struct hdmi_avi_infoframe avi;