drm fixes for 5.19-rc4

amdgpu:
 - Adjust GTT size logic
 - eDP fix for RMB
 - DCN 3.15 fix
 - DP training fix
 - Color encoding fix for DCN2+
 
 sun4i:
 - multiple suspend fixes
 
 vc4:
 - rework driver split for rpi4,
   fixes multiple crashers.
 
 panel:
 - quirk for Aya Neo Next
 
 i915:
 - Revert low voltage SKU check removal to fix display issues
 - Apply PLL DCO fraction workaround for ADL-S
 - Don't show engine classes not present in client fdinfo
 
 msm:
 - Workaround for parade DSI bridge power sequencing
 - Fix for multi-planar YUV format offsets
 - Limiting WB modes to max sspp linewidth
 - Fixing the supported rotations to add 180 back for IGT
 - Fix to handle pm_runtime_get_sync() errors to avoid unclocked access
   in the bind() path for dpu driver
 - Fix the irq_free() without request issue which was being hit frequently
   in CI.
 - Fix to add minimum ICC vote in the msm_mdss pm_resume path to address
   bootup splats
 - Fix to avoid dereferencing without checking in WB encoder
 - Fix to avoid crash during suspend in DP driver by ensuring interrupt
   mask bits are updated
 - Remove unused code from dpu_encoder_virt_atomic_check()
 - Fix to remove redundant init of dsc variable
 - Fix to ensure mmap offset is initialized to avoid memory corruption
   from unpin/evict
 - Fix double runpm disable in probe-defer path
 - VMA fenced-unpin fixes
 - Fix for WB max-width
 - Fix for rare dp resolution change issue
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEEKbZHaGwW9KfbeusDHTzWXnEhr4FAmK1S1EACgkQDHTzWXnE
 hr76nxAArO4n/Hqjuvp50KDuOSbGqSU0z97j9ImzAij6mIoBv2HrQ3VDitdON8ym
 8QI7Xc3mgiABs6GbBfIVm8cxEY0CKjH/l0IqZ6z4CT06T1lhgc5KwvceoE/Qap6y
 ZrleAjk34ZS1kagMUklm6DTkKRnNIkYLt/CLX7VVV2Afpiduanr0ILwEPLqQrbXP
 zcB3nblu34/5Mywa2dPFChspqlv4g+RyUML2M8M7g4lOC+dDaQBbr69g2m17fMar
 vmXyCsHyo5LSGLKen8k+oIQWXYzpCbH9n/6s+OLe/HMX52lkudEtIcA7jcxwNY46
 tajy3yfrP/c3FOVF2mTGwIs/zx0R1XiEjUgcDGAJwo2bY8OMJqToLRPj9M0Gyoub
 UixoYfqGVwxpInVHmRVx7zWF5c0/RpRY1yLcOuPSmG+qXx9PHny+I2eYgsNzkT0M
 UmM6pt5Vlj2AjjU0NKfNbZ7MM6lSE82lvtdV3h7sR9I3LCLZhPyXqNudWZtKFtSF
 qIJL4yq4mjPZ5F9/fG8ttz7tnJ9yNe2Gq3AbldKlbIZZ2EpOGDjaDV75zqA5lxyY
 bmz8U5+QbK0/LTsLjoQtX8Ch9tVwyZjGtUZ2zxuiId0OKla4GSfTKFWE6cZAAtYg
 rtkn6Q3WAVyAElg3E2JojVjIGalpIsXamfKWmc4wCUIVy2WbLgk=
 =yG/r
 -----END PGP SIGNATURE-----

Merge tag 'drm-fixes-2022-06-24' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "Fixes for this week, bit larger than normal, but I think the last
  couple have been quieter, and it's only rc4.

  There are a lot of small msm fixes, and a slightly larger set of vc4
  fixes. The vc4 fixes clean up a lot of crashes around the rPI4
  hardware differences from earlier ones, and problems in the page flip
  and modeset code which assumed earlier hw, so I thought it would be
  okay to keep them in.

  Otherwise, it's a few amdgpu, i915, sun4i and a panel quirk.

  amdgpu:
   - Adjust GTT size logic
   - eDP fix for RMB
   - DCN 3.15 fix
   - DP training fix
   - Color encoding fix for DCN2+

  sun4i:
   - multiple suspend fixes

  vc4:
   - rework driver split for rpi4, fixes multiple crashers.

  panel:
   - quirk for Aya Neo Next

  i915:
   - Revert low voltage SKU check removal to fix display issues
   - Apply PLL DCO fraction workaround for ADL-S
   - Don't show engine classes not present in client fdinfo

  msm:
   - Workaround for parade DSI bridge power sequencing
   - Fix for multi-planar YUV format offsets
   - Limiting WB modes to max sspp linewidth
   - Fixing the supported rotations to add 180 back for IGT
   - Fix to handle pm_runtime_get_sync() errors to avoid unclocked
     access in the bind() path for dpu driver
   - Fix the irq_free() without request issue which was being hit
     frequently in CI.
   - Fix to add minimum ICC vote in the msm_mdss pm_resume path to
     address bootup splats
   - Fix to avoid dereferencing without checking in WB encoder
   - Fix to avoid crash during suspend in DP driver by ensuring
     interrupt mask bits are updated
   - Remove unused code from dpu_encoder_virt_atomic_check()
   - Fix to remove redundant init of dsc variable
   - Fix to ensure mmap offset is initialized to avoid memory corruption
     from unpin/evict
   - Fix double runpm disable in probe-defer path
   - VMA fenced-unpin fixes
   - Fix for WB max-width
   - Fix for rare dp resolution change issue"

* tag 'drm-fixes-2022-06-24' of git://anongit.freedesktop.org/drm/drm: (41 commits)
  amd/display/dc: Fix COLOR_ENCODING and COLOR_RANGE doing nothing for DCN20+
  drm/amd/display: Fix typo in override_lane_settings
  drm/amd/display: Fix DC warning at driver load
  drm/amd: Revert "drm/amd/display: keep eDP Vdd on when eDP stream is already enabled"
  drm/amdgpu: Adjust logic around GTT size (v3)
  drm/sun4i: Return if frontend is not present
  drm/vc4: fix error code in vc4_check_tex_size()
  drm/sun4i: Add DMA mask and segment size
  drm/vc4: hdmi: Fixed possible integer overflow
  drm/i915/display: Re-add check for low voltage sku for max dp source rate
  drm/i915/fdinfo: Don't show engine classes not present
  drm/i915: Implement w/a 22010492432 for adl-s
  drm: panel-orientation-quirks: Add quirk for Aya Neo Next
  drm/msm/dp: force link training for display resolution change
  drm/msm/dpu: limit wb modes based on max_mixer_width
  drm/msm/dp: check core_initialized before disable interrupts at dp_display_unbind()
  drm/msm/mdp4: Fix refcount leak in mdp4_modeset_init_intf
  drm/msm: Don't overwrite hw fence in hw_init
  drm/msm: Drop update_fences()
  drm/vc4: Warn if some v3d code is run on BCM2711
  ...
commit 38bc4ac431
Linus Torvalds, 2022-06-24 11:43:49 -07:00
47 changed files with 672 additions and 270 deletions


@ -1798,18 +1798,26 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
DRM_INFO("amdgpu: %uM of VRAM memory ready\n",
(unsigned) (adev->gmc.real_vram_size / (1024 * 1024)));
/* Compute GTT size, either bsaed on 3/4th the size of RAM size
/* Compute GTT size, either based on 1/2 the size of RAM size
* or whatever the user passed on module init */
if (amdgpu_gtt_size == -1) {
struct sysinfo si;
si_meminfo(&si);
gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
adev->gmc.mc_vram_size),
((uint64_t)si.totalram * si.mem_unit * 3/4));
}
else
/* Certain GL unit tests for large textures can cause problems
* with the OOM killer since there is no way to link this memory
* to a process. This was originally mitigated (but not necessarily
* eliminated) by limiting the GTT size. The problem is this limit
* is often too low for many modern games so just make the limit 1/2
* of system memory which aligns with TTM. The OOM accounting needs
* to be addressed, but we shouldn't prevent common 3D applications
* from being usable just to potentially mitigate that corner case.
*/
gtt_size = max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
(u64)si.totalram * si.mem_unit / 2);
} else {
gtt_size = (uint64_t)amdgpu_gtt_size << 20;
}
/* Initialize GTT memory pool */
r = amdgpu_gtt_mgr_init(adev, gtt_size);
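
As a worked example of the behavior change (hypothetical machine with 32 GiB of RAM and 8 GiB of VRAM, and assuming AMDGPU_DEFAULT_GTT_SIZE_MB is still 3072, i.e. 3 GiB):

    old: gtt_size = min(max(3 GiB, 8 GiB VRAM), 32 GiB * 3/4) = 8 GiB
    new: gtt_size = max(3 GiB, 32 GiB / 2)                    = 16 GiB

so the GTT limit now scales with system memory instead of being pinned to the VRAM size on mid-sized machines.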


@ -550,7 +550,7 @@ static void dcn315_clk_mgr_helper_populate_bw_params(
if (!bw_params->clk_table.entries[i].dtbclk_mhz)
bw_params->clk_table.entries[i].dtbclk_mhz = def_max.dtbclk_mhz;
}
ASSERT(bw_params->clk_table.entries[i].dcfclk_mhz);
ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz);
bw_params->vram_type = bios_info->memory_type;
bw_params->num_channels = bios_info->ma_channel_number;
if (!bw_params->num_channels)
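
The dcn315 change is the classic post-loop off-by-one: once the for loop over the clock table terminates, i equals the entry count, so entries[i] points one past the last populated slot. A minimal sketch of the pattern (hypothetical names, not the driver's code):

    int i;
    for (i = 0; i < num_entries; i++)
            populate(&entries[i]);
    /* here i == num_entries; the last valid element is entries[i - 1] */
    ASSERT(entries[i - 1].dcfclk_mhz);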


@ -944,7 +944,7 @@ static void override_lane_settings(const struct link_training_settings *lt_setti
return;
for (lane = 1; lane < LANE_COUNT_DP_MAX; lane++) {
for (lane = 0; lane < LANE_COUNT_DP_MAX; lane++) {
if (lt_settings->voltage_swing)
lane_settings[lane].VOLTAGE_SWING = *lt_settings->voltage_swing;
if (lt_settings->pre_emphasis)


@ -1766,29 +1766,9 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
break;
}
}
/*
* TO-DO: So far the code logic below only addresses single eDP case.
* For dual eDP case, there are a few things that need to be
* implemented first:
*
* 1. Change the fastboot logic above, so eDP link[0 or 1]'s
* stream[0 or 1] will all be checked.
*
* 2. Change keep_edp_vdd_on to an array, and maintain keep_edp_vdd_on
* for each eDP.
*
* Once above 2 things are completed, we can then change the logic below
* correspondingly, so dual eDP case will be fully covered.
*/
// We are trying to enable eDP, don't power down VDD if eDP stream is existing
if ((edp_stream_num == 1 && edp_streams[0] != NULL) || can_apply_edp_fast_boot) {
// We are trying to enable eDP, don't power down VDD
if (can_apply_edp_fast_boot)
keep_edp_vdd_on = true;
DC_LOG_EVENT_LINK_TRAINING("Keep eDP Vdd on\n");
} else {
DC_LOG_EVENT_LINK_TRAINING("No eDP stream enabled, turn eDP Vdd off\n");
}
}
// Check seamless boot support


@ -212,6 +212,9 @@ static void dpp2_cnv_setup (
break;
}
/* Set default color space based on format if none is given. */
color_space = input_color_space ? input_color_space : color_space;
if (is_2bit == 1 && alpha_2bit_lut != NULL) {
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);


@ -153,6 +153,9 @@ static void dpp201_cnv_setup(
break;
}
/* Set default color space based on format if none is given. */
color_space = input_color_space ? input_color_space : color_space;
if (is_2bit == 1 && alpha_2bit_lut != NULL) {
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);


@ -294,6 +294,9 @@ static void dpp3_cnv_setup (
break;
}
/* Set default color space based on format if none is given. */
color_space = input_color_space ? input_color_space : color_space;
if (is_2bit == 1 && alpha_2bit_lut != NULL) {
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);
REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);


@ -152,6 +152,12 @@ static const struct dmi_system_id orientation_data[] = {
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
},
.driver_data = (void *)&lcd800x1280_rightside_up,
}, { /* AYA NEO NEXT */
.matches = {
DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
DMI_MATCH(DMI_BOARD_NAME, "NEXT"),
},
.driver_data = (void *)&lcd800x1280_rightside_up,
}, { /* Chuwi HiBook (CWI514) */
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),


@ -388,13 +388,23 @@ static int dg2_max_source_rate(struct intel_dp *intel_dp)
return intel_dp_is_edp(intel_dp) ? 810000 : 1350000;
}
static bool is_low_voltage_sku(struct drm_i915_private *i915, enum phy phy)
{
u32 voltage;
voltage = intel_de_read(i915, ICL_PORT_COMP_DW3(phy)) & VOLTAGE_INFO_MASK;
return voltage == VOLTAGE_INFO_0_85V;
}
static int icl_max_source_rate(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
if (intel_phy_is_combo(dev_priv, phy) && !intel_dp_is_edp(intel_dp))
if (intel_phy_is_combo(dev_priv, phy) &&
(is_low_voltage_sku(dev_priv, phy) || !intel_dp_is_edp(intel_dp)))
return 540000;
return 810000;
@ -402,7 +412,23 @@ static int icl_max_source_rate(struct intel_dp *intel_dp)
static int ehl_max_source_rate(struct intel_dp *intel_dp)
{
if (intel_dp_is_edp(intel_dp))
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
if (intel_dp_is_edp(intel_dp) || is_low_voltage_sku(dev_priv, phy))
return 540000;
return 810000;
}
static int dg1_max_source_rate(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
enum phy phy = intel_port_to_phy(i915, dig_port->base.port);
if (intel_phy_is_combo(i915, phy) && is_low_voltage_sku(i915, phy))
return 540000;
return 810000;
@ -445,7 +471,7 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
max_rate = dg2_max_source_rate(intel_dp);
else if (IS_ALDERLAKE_P(dev_priv) || IS_ALDERLAKE_S(dev_priv) ||
IS_DG1(dev_priv) || IS_ROCKETLAKE(dev_priv))
max_rate = 810000;
max_rate = dg1_max_source_rate(intel_dp);
else if (IS_JSL_EHL(dev_priv))
max_rate = ehl_max_source_rate(intel_dp);
else


@ -2396,7 +2396,7 @@ static void icl_wrpll_params_populate(struct skl_wrpll_params *params,
}
/*
* Display WA #22010492432: ehl, tgl, adl-p
* Display WA #22010492432: ehl, tgl, adl-s, adl-p
* Program half of the nominal DCO divider fraction value.
*/
static bool
@ -2404,7 +2404,7 @@ ehl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915)
{
return ((IS_PLATFORM(i915, INTEL_ELKHARTLAKE) &&
IS_JSL_EHL_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) ||
IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) &&
IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) &&
i915->dpll.ref_clks.nssc == 38400;
}


@ -116,6 +116,7 @@ show_client_class(struct seq_file *m,
total += busy_add(ctx, class);
rcu_read_unlock();
if (capacity)
seq_printf(m, "drm-engine-%s:\t%llu ns\n",
uabi_class_names[class], total);


@ -498,10 +498,15 @@ int adreno_hw_init(struct msm_gpu *gpu)
ring->cur = ring->start;
ring->next = ring->start;
/* reset completed fence seqno: */
ring->memptrs->fence = ring->fctx->completed_fence;
ring->memptrs->rptr = 0;
/* Detect and clean up an impossible fence, ie. if GPU managed
* to scribble something invalid, we don't want that to confuse
* us into mistakenly believing that submits have completed.
*/
if (fence_before(ring->fctx->last_fence, ring->memptrs->fence)) {
ring->memptrs->fence = ring->fctx->last_fence;
}
}
return 0;
@ -1057,6 +1062,7 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
release_firmware(adreno_gpu->fw[i]);
if (pm_runtime_enabled(&priv->gpu_pdev->dev))
pm_runtime_disable(&priv->gpu_pdev->dev);
msm_gpu_cleanup(&adreno_gpu->base);
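
The fence_before() guard leans on the kernel's wraparound-safe sequence-number idiom, the same one msm_fence_completed() uses later in this series. A minimal sketch of the comparison, assuming 32-bit seqnos (this matches the helper shape in msm_drv.h, but treat it as illustrative):

    static inline bool fence_before(uint32_t a, uint32_t b)
    {
            /* signed subtraction keeps the comparison correct across
             * u32 wraparound, as long as a and b stay within 2^31 */
            return (int32_t)(a - b) < 0;
    }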


@ -11,7 +11,14 @@ static int dpu_wb_conn_get_modes(struct drm_connector *connector)
struct msm_drm_private *priv = dev->dev_private;
struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_linewidth,
/*
* We should ideally be limiting the modes only to the maxlinewidth but
* on some chipsets this will allow even 4k modes to be added which will
* fail the per SSPP bandwidth checks. So, till we have dual-SSPP support
* and source split support added let's limit the modes based on max_mixer_width
* as 4K modes can then be supported.
*/
return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_mixer_width,
dev->mode_config.max_height);
}


@ -216,6 +216,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
encoder = mdp4_lcdc_encoder_init(dev, panel_node);
if (IS_ERR(encoder)) {
DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n");
of_node_put(panel_node);
return PTR_ERR(encoder);
}
@ -225,6 +226,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms,
connector = mdp4_lvds_connector_init(dev, panel_node, encoder);
if (IS_ERR(connector)) {
DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n");
of_node_put(panel_node);
return PTR_ERR(connector);
}
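
Both hunks follow the standard OF refcounting rule: of_parse_phandle() and friends return a device_node with an elevated refcount, and every exit path, error paths included, must drop it with of_node_put(). A hedged sketch of the shape (do_setup() is a hypothetical helper):

    struct device_node *panel_node = of_parse_phandle(np, "panel", 0);
    if (!panel_node)
            return -ENODEV;
    ret = do_setup(panel_node);
    of_node_put(panel_node);        /* dropped on success and on error */
    return ret;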


@ -1534,6 +1534,8 @@ end:
return ret;
}
static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl);
static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
{
int ret = 0;
@ -1557,7 +1559,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
ret = dp_ctrl_on_link(&ctrl->dp_ctrl);
if (!ret)
ret = dp_ctrl_on_stream(&ctrl->dp_ctrl);
ret = dp_ctrl_on_stream_phy_test_report(&ctrl->dp_ctrl);
else
DRM_ERROR("failed to enable DP link controller\n");
@ -1813,7 +1815,27 @@ static int dp_ctrl_link_retrain(struct dp_ctrl_private *ctrl)
return dp_ctrl_setup_main_link(ctrl, &training_step);
}
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl)
{
int ret;
struct dp_ctrl_private *ctrl;
ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
ret = dp_ctrl_enable_stream_clocks(ctrl);
if (ret) {
DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
return ret;
}
dp_ctrl_send_phy_test_pattern(ctrl);
return 0;
}
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train)
{
int ret = 0;
bool mainlink_ready = false;
@ -1849,12 +1871,7 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
goto end;
}
if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {
dp_ctrl_send_phy_test_pattern(ctrl);
return 0;
}
if (!dp_ctrl_channel_eq_ok(ctrl))
if (force_link_train || !dp_ctrl_channel_eq_ok(ctrl))
dp_ctrl_link_retrain(ctrl);
/* stop txing train pattern to end link training */


@ -21,7 +21,7 @@ struct dp_ctrl {
};
int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train);
int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
int dp_ctrl_off_link(struct dp_ctrl *dp_ctrl);
int dp_ctrl_off(struct dp_ctrl *dp_ctrl);


@ -309,6 +309,7 @@ static void dp_display_unbind(struct device *dev, struct device *master,
struct msm_drm_private *priv = dev_get_drvdata(master);
/* disable all HPD interrupts */
if (dp->core_initialized)
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
kthread_stop(dp->ev_tsk);
@ -872,7 +873,7 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
return 0;
}
rc = dp_ctrl_on_stream(dp->ctrl);
rc = dp_ctrl_on_stream(dp->ctrl, data);
if (!rc)
dp_display->power_on = true;
@ -1659,6 +1660,7 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
int rc = 0;
struct dp_display_private *dp_display;
u32 state;
bool force_link_train = false;
dp_display = container_of(dp, struct dp_display_private, dp_display);
if (!dp_display->dp_mode.drm_mode.clock) {
@ -1693,10 +1695,12 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
state = dp_display->hpd_state;
if (state == ST_DISPLAY_OFF)
if (state == ST_DISPLAY_OFF) {
dp_display_host_phy_init(dp_display);
force_link_train = true;
}
dp_display_enable(dp_display, 0);
dp_display_enable(dp_display, force_link_train);
rc = dp_display_post_enable(dp);
if (rc) {
@ -1705,10 +1709,6 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge)
dp_display_unprepare(dp);
}
/* manual kick off plug event to train link */
if (state == ST_DISPLAY_OFF)
dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
/* completed connection */
dp_display->hpd_state = ST_CONNECTED;


@ -964,7 +964,7 @@ static const struct drm_driver msm_driver = {
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
.gem_prime_mmap = drm_gem_prime_mmap,
.gem_prime_mmap = msm_gem_prime_mmap,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = msm_debugfs_init,
#endif


@ -246,6 +246,7 @@ unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_t
void msm_gem_shrinker_init(struct drm_device *dev);
void msm_gem_shrinker_cleanup(struct drm_device *dev);
int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);


@ -46,12 +46,14 @@ bool msm_fence_completed(struct msm_fence_context *fctx, uint32_t fence)
(int32_t)(*fctx->fenceptr - fence) >= 0;
}
/* called from workqueue */
/* called from irq handler and workqueue (in recover path) */
void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
{
spin_lock(&fctx->spinlock);
unsigned long flags;
spin_lock_irqsave(&fctx->spinlock, flags);
fctx->completed_fence = max(fence, fctx->completed_fence);
spin_unlock(&fctx->spinlock);
spin_unlock_irqrestore(&fctx->spinlock, flags);
}
struct msm_fence {
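
The irqsave conversion is the usual rule once a lock becomes reachable from hard-IRQ context: a process-context holder must disable local interrupts while holding it, otherwise the IRQ handler can preempt that holder on the same CPU and spin on the lock forever. Illustrative timeline of the deadlock being closed:

    CPU0, process context              CPU0, hardirq
    spin_lock(&fctx->spinlock)
                                       <GPU retire IRQ fires>
                                       spin_lock(&fctx->spinlock)   /* spins, never returns */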


@ -439,14 +439,12 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
return ret;
}
void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
void msm_gem_unpin_locked(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
GEM_WARN_ON(!msm_gem_is_locked(obj));
msm_gem_unpin_vma(vma);
msm_obj->pin_count--;
GEM_WARN_ON(msm_obj->pin_count < 0);
@ -586,7 +584,8 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj,
msm_gem_lock(obj);
vma = lookup_vma(obj, aspace);
if (!GEM_WARN_ON(!vma)) {
msm_gem_unpin_vma_locked(obj, vma);
msm_gem_unpin_vma(vma);
msm_gem_unpin_locked(obj);
}
msm_gem_unlock(obj);
}


@ -145,7 +145,7 @@ struct msm_gem_object {
uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
void msm_gem_unpin_locked(struct drm_gem_object *obj);
struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
struct msm_gem_address_space *aspace);
int msm_gem_get_iova(struct drm_gem_object *obj,
@ -380,7 +380,8 @@ struct msm_gem_submit {
#define BO_VALID 0x8000 /* is current addr in cmdstream correct/valid? */
#define BO_LOCKED 0x4000 /* obj lock is held */
#define BO_ACTIVE 0x2000 /* active refcnt is held */
#define BO_PINNED 0x1000 /* obj is pinned and on active list */
#define BO_OBJ_PINNED 0x1000 /* obj (pages) is pinned and on active list */
#define BO_VMA_PINNED 0x0800 /* vma (virtual address) is pinned */
uint32_t flags;
union {
struct msm_gem_object *obj;


@ -11,6 +11,21 @@
#include "msm_drv.h"
#include "msm_gem.h"
int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
int ret;
/* Ensure the mmap offset is initialized. We lazily initialize it,
* so if it has not been first mmap'd directly as a GEM object, the
* mmap offset will not be already initialized.
*/
ret = drm_gem_create_mmap_offset(obj);
if (ret)
return ret;
return drm_gem_prime_mmap(obj, vma);
}
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);


@ -232,8 +232,11 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
*/
submit->bos[i].flags &= ~cleanup_flags;
if (flags & BO_PINNED)
msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);
if (flags & BO_VMA_PINNED)
msm_gem_unpin_vma(submit->bos[i].vma);
if (flags & BO_OBJ_PINNED)
msm_gem_unpin_locked(obj);
if (flags & BO_ACTIVE)
msm_gem_active_put(obj);
@ -244,7 +247,9 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i)
{
submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE | BO_LOCKED);
unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED |
BO_ACTIVE | BO_LOCKED;
submit_cleanup_bo(submit, i, cleanup_flags);
if (!(submit->bos[i].flags & BO_VALID))
submit->bos[i].iova = 0;
@ -375,7 +380,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
if (ret)
break;
submit->bos[i].flags |= BO_PINNED;
submit->bos[i].flags |= BO_OBJ_PINNED | BO_VMA_PINNED;
submit->bos[i].vma = vma;
if (vma->iova == submit->bos[i].iova) {
@ -511,7 +516,7 @@ static void submit_cleanup(struct msm_gem_submit *submit, bool error)
unsigned i;
if (error)
cleanup_flags |= BO_PINNED | BO_ACTIVE;
cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE;
for (i = 0; i < submit->nr_bos; i++) {
struct msm_gem_object *msm_obj = submit->bos[i].obj;
@ -529,7 +534,8 @@ void msm_submit_retire(struct msm_gem_submit *submit)
struct drm_gem_object *obj = &submit->bos[i].obj->base;
msm_gem_lock(obj);
submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE);
/* Note, VMA already fence-unpinned before submit: */
submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE);
msm_gem_unlock(obj);
drm_gem_object_put(obj);
}


@ -62,8 +62,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
unsigned size = vma->node.size;
/* Print a message if we try to purge a vma in use */
if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
return;
GEM_WARN_ON(msm_gem_vma_inuse(vma));
/* Don't do anything if the memory isn't mapped */
if (!vma->mapped)
@ -128,8 +127,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
void msm_gem_close_vma(struct msm_gem_address_space *aspace,
struct msm_gem_vma *vma)
{
if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))
return;
GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped);
spin_lock(&aspace->lock);
if (vma->iova)


@ -164,24 +164,6 @@ int msm_gpu_hw_init(struct msm_gpu *gpu)
return ret;
}
static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
uint32_t fence)
{
struct msm_gem_submit *submit;
unsigned long flags;
spin_lock_irqsave(&ring->submit_lock, flags);
list_for_each_entry(submit, &ring->submits, node) {
if (fence_after(submit->seqno, fence))
break;
msm_update_fence(submit->ring->fctx,
submit->hw_fence->seqno);
dma_fence_signal(submit->hw_fence);
}
spin_unlock_irqrestore(&ring->submit_lock, flags);
}
#ifdef CONFIG_DEV_COREDUMP
static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset,
size_t count, void *data, size_t datalen)
@ -436,9 +418,9 @@ static void recover_worker(struct kthread_work *work)
* one more to clear the faulting submit
*/
if (ring == cur_ring)
fence++;
ring->memptrs->fence = ++fence;
update_fences(gpu, ring, fence);
msm_update_fence(ring->fctx, fence);
}
if (msm_gpu_active(gpu)) {
@ -672,7 +654,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
msm_submit_retire(submit);
pm_runtime_mark_last_busy(&gpu->pdev->dev);
pm_runtime_put_autosuspend(&gpu->pdev->dev);
spin_lock_irqsave(&ring->submit_lock, flags);
list_del(&submit->node);
@ -686,6 +667,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
msm_devfreq_idle(gpu);
mutex_unlock(&gpu->active_lock);
pm_runtime_put_autosuspend(&gpu->pdev->dev);
msm_gem_submit_put(submit);
}
@ -735,7 +718,7 @@ void msm_gpu_retire(struct msm_gpu *gpu)
int i;
for (i = 0; i < gpu->nr_rings; i++)
update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence);
msm_update_fence(gpu->rb[i]->fctx, gpu->rb[i]->memptrs->fence);
kthread_queue_work(gpu->worker, &gpu->retire_work);
update_sw_cntrs(gpu);


@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
u64 addr = iova;
unsigned int i;
for_each_sg(sgt->sgl, sg, sgt->nents, i) {
for_each_sgtable_sg(sgt, sg, i) {
size_t size = sg->length;
phys_addr_t phys = sg_phys(sg);
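
The distinction matters because sg_table keeps two counts: orig_nents (CPU-side entries) and nents (entries after DMA mapping, which can be smaller when the IOMMU merges segments). Walking CPU pages with the DMA count can silently skip tail entries. for_each_sgtable_sg() bakes in the right count; roughly, per its definition in linux/scatterlist.h:

    #define for_each_sgtable_sg(sgt, sg, i) \
            for_each_sg((sgt)->sgl, sg, (sgt)->orig_nents, i)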


@ -25,7 +25,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
msm_gem_lock(obj);
msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx);
submit->bos[i].flags &= ~BO_PINNED;
submit->bos[i].flags &= ~BO_VMA_PINNED;
msm_gem_unlock(obj);
}


@ -7,6 +7,7 @@
*/
#include <linux/component.h>
#include <linux/dma-mapping.h>
#include <linux/kfifo.h>
#include <linux/module.h>
#include <linux/of_graph.h>
@ -73,7 +74,6 @@ static int sun4i_drv_bind(struct device *dev)
goto free_drm;
}
dev_set_drvdata(dev, drm);
drm->dev_private = drv;
INIT_LIST_HEAD(&drv->frontend_list);
INIT_LIST_HEAD(&drv->engine_list);
@ -114,6 +114,8 @@ static int sun4i_drv_bind(struct device *dev)
drm_fbdev_generic_setup(drm, 32);
dev_set_drvdata(dev, drm);
return 0;
finish_poll:
@ -130,6 +132,7 @@ static void sun4i_drv_unbind(struct device *dev)
{
struct drm_device *drm = dev_get_drvdata(dev);
dev_set_drvdata(dev, NULL);
drm_dev_unregister(drm);
drm_kms_helper_poll_fini(drm);
drm_atomic_helper_shutdown(drm);
@ -367,6 +370,13 @@ static int sun4i_drv_probe(struct platform_device *pdev)
INIT_KFIFO(list.fifo);
/*
* DE2 and DE3 cores actually support 40-bit addresses, but
* the driver does not.
*/
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
dma_set_max_seg_size(&pdev->dev, UINT_MAX);
for (i = 0;; i++) {
struct device_node *pipeline = of_parse_phandle(np,
"allwinner,pipelines",


@ -117,7 +117,7 @@ static bool sun4i_layer_format_mod_supported(struct drm_plane *plane,
struct sun4i_layer *layer = plane_to_sun4i_layer(plane);
if (IS_ERR_OR_NULL(layer->backend->frontend))
sun4i_backend_format_is_supported(format, modifier);
return sun4i_backend_format_is_supported(format, modifier);
return sun4i_backend_format_is_supported(format, modifier) ||
sun4i_frontend_format_is_supported(format, modifier);


@ -93,34 +93,10 @@ crtcs_exit:
return crtcs;
}
static int sun8i_dw_hdmi_find_connector_pdev(struct device *dev,
struct platform_device **pdev_out)
{
struct platform_device *pdev;
struct device_node *remote;
remote = of_graph_get_remote_node(dev->of_node, 1, -1);
if (!remote)
return -ENODEV;
if (!of_device_is_compatible(remote, "hdmi-connector")) {
of_node_put(remote);
return -ENODEV;
}
pdev = of_find_device_by_node(remote);
of_node_put(remote);
if (!pdev)
return -ENODEV;
*pdev_out = pdev;
return 0;
}
static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
void *data)
{
struct platform_device *pdev = to_platform_device(dev), *connector_pdev;
struct platform_device *pdev = to_platform_device(dev);
struct dw_hdmi_plat_data *plat_data;
struct drm_device *drm = data;
struct device_node *phy_node;
@ -167,30 +143,16 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
return dev_err_probe(dev, PTR_ERR(hdmi->regulator),
"Couldn't get regulator\n");
ret = sun8i_dw_hdmi_find_connector_pdev(dev, &connector_pdev);
if (!ret) {
hdmi->ddc_en = gpiod_get_optional(&connector_pdev->dev,
"ddc-en", GPIOD_OUT_HIGH);
platform_device_put(connector_pdev);
if (IS_ERR(hdmi->ddc_en)) {
dev_err(dev, "Couldn't get ddc-en gpio\n");
return PTR_ERR(hdmi->ddc_en);
}
}
ret = regulator_enable(hdmi->regulator);
if (ret) {
dev_err(dev, "Failed to enable regulator\n");
goto err_unref_ddc_en;
return ret;
}
gpiod_set_value(hdmi->ddc_en, 1);
ret = reset_control_deassert(hdmi->rst_ctrl);
if (ret) {
dev_err(dev, "Could not deassert ctrl reset control\n");
goto err_disable_ddc_en;
goto err_disable_regulator;
}
ret = clk_prepare_enable(hdmi->clk_tmds);
@ -245,12 +207,8 @@ err_disable_clk_tmds:
clk_disable_unprepare(hdmi->clk_tmds);
err_assert_ctrl_reset:
reset_control_assert(hdmi->rst_ctrl);
err_disable_ddc_en:
gpiod_set_value(hdmi->ddc_en, 0);
err_disable_regulator:
regulator_disable(hdmi->regulator);
err_unref_ddc_en:
if (hdmi->ddc_en)
gpiod_put(hdmi->ddc_en);
return ret;
}
@ -264,11 +222,7 @@ static void sun8i_dw_hdmi_unbind(struct device *dev, struct device *master,
sun8i_hdmi_phy_deinit(hdmi->phy);
clk_disable_unprepare(hdmi->clk_tmds);
reset_control_assert(hdmi->rst_ctrl);
gpiod_set_value(hdmi->ddc_en, 0);
regulator_disable(hdmi->regulator);
if (hdmi->ddc_en)
gpiod_put(hdmi->ddc_en);
}
static const struct component_ops sun8i_dw_hdmi_ops = {


@ -9,7 +9,6 @@
#include <drm/bridge/dw_hdmi.h>
#include <drm/drm_encoder.h>
#include <linux/clk.h>
#include <linux/gpio/consumer.h>
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <linux/reset.h>
@ -193,7 +192,6 @@ struct sun8i_dw_hdmi {
struct regulator *regulator;
const struct sun8i_dw_hdmi_quirks *quirks;
struct reset_control *rst_ctrl;
struct gpio_desc *ddc_en;
};
extern struct platform_driver sun8i_hdmi_phy_driver;


@ -248,6 +248,9 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo)
{
struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
if (WARN_ON_ONCE(vc4->is_vc5))
return;
mutex_lock(&vc4->purgeable.lock);
list_add_tail(&bo->size_head, &vc4->purgeable.list);
vc4->purgeable.num++;
@ -259,6 +262,9 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
{
struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
if (WARN_ON_ONCE(vc4->is_vc5))
return;
/* list_del_init() is used here because the caller might release
* the purgeable lock in order to acquire the madv one and update the
* madv status.
@ -387,6 +393,9 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_bo *bo;
if (WARN_ON_ONCE(vc4->is_vc5))
return ERR_PTR(-ENODEV);
bo = kzalloc(sizeof(*bo), GFP_KERNEL);
if (!bo)
return ERR_PTR(-ENOMEM);
@ -413,6 +422,9 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
struct drm_gem_cma_object *cma_obj;
struct vc4_bo *bo;
if (WARN_ON_ONCE(vc4->is_vc5))
return ERR_PTR(-ENODEV);
if (size == 0)
return ERR_PTR(-EINVAL);
@ -471,19 +483,20 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
return bo;
}
int vc4_dumb_create(struct drm_file *file_priv,
int vc4_bo_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_bo *bo = NULL;
int ret;
if (args->pitch < min_pitch)
args->pitch = min_pitch;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (args->size < args->pitch * args->height)
args->size = args->pitch * args->height;
ret = vc4_dumb_fixup_args(args);
if (ret)
return ret;
bo = vc4_bo_create(dev, args->size, false, VC4_BO_TYPE_DUMB);
if (IS_ERR(bo))
@ -601,8 +614,12 @@ static void vc4_bo_cache_time_work(struct work_struct *work)
int vc4_bo_inc_usecnt(struct vc4_bo *bo)
{
struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
int ret;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
/* Fast path: if the BO is already retained by someone, no need to
* check the madv status.
*/
@ -637,6 +654,11 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo)
void vc4_bo_dec_usecnt(struct vc4_bo *bo)
{
struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
if (WARN_ON_ONCE(vc4->is_vc5))
return;
/* Fast path: if the BO is still retained by someone, no need to test
* the madv value.
*/
@ -756,6 +778,9 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
struct vc4_bo *bo = NULL;
int ret;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
ret = vc4_grab_bin_bo(vc4, vc4file);
if (ret)
return ret;
@ -779,9 +804,13 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_mmap_bo *args = data;
struct drm_gem_object *gem_obj;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
gem_obj = drm_gem_object_lookup(file_priv, args->handle);
if (!gem_obj) {
DRM_DEBUG("Failed to look up GEM BO %d\n", args->handle);
@ -805,6 +834,9 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
struct vc4_bo *bo = NULL;
int ret;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (args->size == 0)
return -EINVAL;
@ -875,11 +907,15 @@ fail:
int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_set_tiling *args = data;
struct drm_gem_object *gem_obj;
struct vc4_bo *bo;
bool t_format;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (args->flags != 0)
return -EINVAL;
@ -918,10 +954,14 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_get_tiling *args = data;
struct drm_gem_object *gem_obj;
struct vc4_bo *bo;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (args->flags != 0 || args->modifier != 0)
return -EINVAL;
@ -948,6 +988,9 @@ int vc4_bo_cache_init(struct drm_device *dev)
struct vc4_dev *vc4 = to_vc4_dev(dev);
int i;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
/* Create the initial set of BO labels that the kernel will
* use. This lets us avoid a bunch of string reallocation in
* the kernel's draw and BO allocation paths.
@ -1007,6 +1050,9 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
struct drm_gem_object *gem_obj;
int ret = 0, label;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (!args->len)
return -EINVAL;


@ -256,7 +256,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
* Removing 1 from the FIFO full level however
* seems to completely remove that issue.
*/
if (!vc4->hvs->hvs5)
if (!vc4->is_vc5)
return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
@ -389,7 +389,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
if (is_dsi)
CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
if (vc4->hvs->hvs5)
if (vc4->is_vc5)
CRTC_WRITE(PV_MUX_CFG,
VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
@ -775,17 +775,18 @@ struct vc4_async_flip_state {
struct drm_framebuffer *old_fb;
struct drm_pending_vblank_event *event;
struct vc4_seqno_cb cb;
union {
struct dma_fence_cb fence;
struct vc4_seqno_cb seqno;
} cb;
};
/* Called when the V3D execution for the BO being flipped to is done, so that
* we can actually update the plane's address to point to it.
*/
static void
vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
vc4_async_page_flip_complete(struct vc4_async_flip_state *flip_state)
{
struct vc4_async_flip_state *flip_state =
container_of(cb, struct vc4_async_flip_state, cb);
struct drm_crtc *crtc = flip_state->crtc;
struct drm_device *dev = crtc->dev;
struct drm_plane *plane = crtc->primary;
@ -802,59 +803,96 @@ vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
drm_crtc_vblank_put(crtc);
drm_framebuffer_put(flip_state->fb);
/* Decrement the BO usecnt in order to keep the inc/dec calls balanced
* when the planes are updated through the async update path.
* FIXME: we should move to generic async-page-flip when it's
* available, so that we can get rid of this hand-made cleanup_fb()
* logic.
*/
if (flip_state->old_fb) {
struct drm_gem_cma_object *cma_bo;
struct vc4_bo *bo;
cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
bo = to_vc4_bo(&cma_bo->base);
vc4_bo_dec_usecnt(bo);
if (flip_state->old_fb)
drm_framebuffer_put(flip_state->old_fb);
}
kfree(flip_state);
}
/* Implements async (non-vblank-synced) page flips.
static void vc4_async_page_flip_seqno_complete(struct vc4_seqno_cb *cb)
{
struct vc4_async_flip_state *flip_state =
container_of(cb, struct vc4_async_flip_state, cb.seqno);
struct vc4_bo *bo = NULL;
if (flip_state->old_fb) {
struct drm_gem_cma_object *cma_bo =
drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
bo = to_vc4_bo(&cma_bo->base);
}
vc4_async_page_flip_complete(flip_state);
/*
* Decrement the BO usecnt in order to keep the inc/dec
* calls balanced when the planes are updated through
* the async update path.
*
* The page flip ioctl needs to return immediately, so we grab the
* modeset semaphore on the pipe, and queue the address update for
* when V3D is done with the BO being flipped to.
* FIXME: we should move to generic async-page-flip when
* it's available, so that we can get rid of this
* hand-made cleanup_fb() logic.
*/
static int vc4_async_page_flip(struct drm_crtc *crtc,
if (bo)
vc4_bo_dec_usecnt(bo);
}
static void vc4_async_page_flip_fence_complete(struct dma_fence *fence,
struct dma_fence_cb *cb)
{
struct vc4_async_flip_state *flip_state =
container_of(cb, struct vc4_async_flip_state, cb.fence);
vc4_async_page_flip_complete(flip_state);
dma_fence_put(fence);
}
static int vc4_async_set_fence_cb(struct drm_device *dev,
struct vc4_async_flip_state *flip_state)
{
struct drm_framebuffer *fb = flip_state->fb;
struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct dma_fence *fence;
int ret;
if (!vc4->is_vc5) {
struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
vc4_async_page_flip_seqno_complete);
}
ret = dma_resv_get_singleton(cma_bo->base.resv, DMA_RESV_USAGE_READ, &fence);
if (ret)
return ret;
/* If there's no fence, complete the page flip immediately */
if (!fence) {
vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
return 0;
}
/* If the fence has already been completed, complete the page flip */
if (dma_fence_add_callback(fence, &flip_state->cb.fence,
vc4_async_page_flip_fence_complete))
vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
return 0;
}
static int
vc4_async_page_flip_common(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags)
{
struct drm_device *dev = crtc->dev;
struct drm_plane *plane = crtc->primary;
int ret = 0;
struct vc4_async_flip_state *flip_state;
struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
/* Increment the BO usecnt here, so that we never end up with an
* unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
* plane is later updated through the non-async path.
* FIXME: we should move to generic async-page-flip when it's
* available, so that we can get rid of this hand-made prepare_fb()
* logic.
*/
ret = vc4_bo_inc_usecnt(bo);
if (ret)
return ret;
flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL);
if (!flip_state) {
vc4_bo_dec_usecnt(bo);
if (!flip_state)
return -ENOMEM;
}
drm_framebuffer_get(fb);
flip_state->fb = fb;
@ -881,23 +919,79 @@ static int vc4_async_page_flip(struct drm_crtc *crtc,
*/
drm_atomic_set_fb_for_plane(plane->state, fb);
vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno,
vc4_async_page_flip_complete);
vc4_async_set_fence_cb(dev, flip_state);
/* Driver takes ownership of state on successful async commit. */
return 0;
}
/* Implements async (non-vblank-synced) page flips.
*
* The page flip ioctl needs to return immediately, so we grab the
* modeset semaphore on the pipe, and queue the address update for
* when V3D is done with the BO being flipped to.
*/
static int vc4_async_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags)
{
struct drm_device *dev = crtc->dev;
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
int ret;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
/*
* Increment the BO usecnt here, so that we never end up with an
* unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
* plane is later updated through the non-async path.
*
* FIXME: we should move to generic async-page-flip when
* it's available, so that we can get rid of this
* hand-made prepare_fb() logic.
*/
ret = vc4_bo_inc_usecnt(bo);
if (ret)
return ret;
ret = vc4_async_page_flip_common(crtc, fb, event, flags);
if (ret) {
vc4_bo_dec_usecnt(bo);
return ret;
}
return 0;
}
static int vc5_async_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags)
{
return vc4_async_page_flip_common(crtc, fb, event, flags);
}
int vc4_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags,
struct drm_modeset_acquire_ctx *ctx)
{
if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
return vc4_async_page_flip(crtc, fb, event, flags);
if (flags & DRM_MODE_PAGE_FLIP_ASYNC) {
struct drm_device *dev = crtc->dev;
struct vc4_dev *vc4 = to_vc4_dev(dev);
if (vc4->is_vc5)
return vc5_async_page_flip(crtc, fb, event, flags);
else
return vc4_async_page_flip(crtc, fb, event, flags);
} else {
return drm_atomic_helper_page_flip(crtc, fb, event, flags, ctx);
}
}
struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc)
@ -1149,7 +1243,7 @@ int vc4_crtc_init(struct drm_device *drm, struct vc4_crtc *vc4_crtc,
crtc_funcs, NULL);
drm_crtc_helper_add(crtc, crtc_helper_funcs);
if (!vc4->hvs->hvs5) {
if (!vc4->is_vc5) {
drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);


@ -63,6 +63,32 @@ void __iomem *vc4_ioremap_regs(struct platform_device *pdev, int index)
return map;
}
int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args)
{
int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
if (args->pitch < min_pitch)
args->pitch = min_pitch;
if (args->size < args->pitch * args->height)
args->size = args->pitch * args->height;
return 0;
}
static int vc5_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
int ret;
ret = vc4_dumb_fixup_args(args);
if (ret)
return ret;
return drm_gem_cma_dumb_create_internal(file_priv, dev, args);
}
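
A quick worked example of the shared fixup: for a 1920x1080 dumb buffer at 32 bpp, min_pitch = DIV_ROUND_UP(1920 * 32, 8) = 7680 bytes, so a caller passing a smaller pitch or size gets args->pitch raised to 7680 and args->size raised to at least 7680 * 1080 = 8294400 bytes.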
static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
@ -73,6 +99,9 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
if (args->pad != 0)
return -EINVAL;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (!vc4->v3d)
return -ENODEV;
@ -116,11 +145,16 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
static int vc4_open(struct drm_device *dev, struct drm_file *file)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_file *vc4file;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
if (!vc4file)
return -ENOMEM;
vc4file->dev = vc4;
vc4_perfmon_open_file(vc4file);
file->driver_priv = vc4file;
@ -132,6 +166,9 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file)
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_file *vc4file = file->driver_priv;
if (WARN_ON_ONCE(vc4->is_vc5))
return;
if (vc4file->bin_bo_used)
vc4_v3d_bin_bo_put(vc4);
@ -160,7 +197,7 @@ static const struct drm_ioctl_desc vc4_drm_ioctls[] = {
DRM_IOCTL_DEF_DRV(VC4_PERFMON_GET_VALUES, vc4_perfmon_get_values_ioctl, DRM_RENDER_ALLOW),
};
static struct drm_driver vc4_drm_driver = {
static const struct drm_driver vc4_drm_driver = {
.driver_features = (DRIVER_MODESET |
DRIVER_ATOMIC |
DRIVER_GEM |
@ -175,7 +212,7 @@ static struct drm_driver vc4_drm_driver = {
.gem_create_object = vc4_create_object,
DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create),
DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_bo_dumb_create),
.ioctls = vc4_drm_ioctls,
.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
@ -189,6 +226,27 @@ static struct drm_driver vc4_drm_driver = {
.patchlevel = DRIVER_PATCHLEVEL,
};
static const struct drm_driver vc5_drm_driver = {
.driver_features = (DRIVER_MODESET |
DRIVER_ATOMIC |
DRIVER_GEM),
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = vc4_debugfs_init,
#endif
DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc5_dumb_create),
.fops = &vc4_drm_fops,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
.date = DRIVER_DATE,
.major = DRIVER_MAJOR,
.minor = DRIVER_MINOR,
.patchlevel = DRIVER_PATCHLEVEL,
};
static void vc4_match_add_drivers(struct device *dev,
struct component_match **match,
struct platform_driver *const *drivers,
@ -212,42 +270,49 @@ static void vc4_match_add_drivers(struct device *dev,
static int vc4_drm_bind(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
const struct drm_driver *driver;
struct rpi_firmware *firmware = NULL;
struct drm_device *drm;
struct vc4_dev *vc4;
struct device_node *node;
struct drm_crtc *crtc;
bool is_vc5;
int ret = 0;
dev->coherent_dma_mask = DMA_BIT_MASK(32);
/* If VC4 V3D is missing, don't advertise render nodes. */
node = of_find_matching_node_and_match(NULL, vc4_v3d_dt_match, NULL);
if (!node || !of_device_is_available(node))
vc4_drm_driver.driver_features &= ~DRIVER_RENDER;
of_node_put(node);
is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
if (is_vc5)
driver = &vc5_drm_driver;
else
driver = &vc4_drm_driver;
vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base);
vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
if (IS_ERR(vc4))
return PTR_ERR(vc4);
vc4->is_vc5 = is_vc5;
drm = &vc4->base;
platform_set_drvdata(pdev, drm);
INIT_LIST_HEAD(&vc4->debugfs_list);
if (!is_vc5) {
mutex_init(&vc4->bin_bo_lock);
ret = vc4_bo_cache_init(drm);
if (ret)
return ret;
}
ret = drmm_mode_config_init(drm);
if (ret)
return ret;
if (!is_vc5) {
ret = vc4_gem_init(drm);
if (ret)
return ret;
}
node = of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
if (node) {
@ -258,7 +323,7 @@ static int vc4_drm_bind(struct device *dev)
return -EPROBE_DEFER;
}
ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver);
ret = drm_aperture_remove_framebuffers(false, driver);
if (ret)
return ret;


@ -48,6 +48,8 @@ enum vc4_kernel_bo_type {
* done. This way, only events related to a specific job will be counted.
*/
struct vc4_perfmon {
struct vc4_dev *dev;
/* Tracks the number of users of the perfmon, when this counter reaches
* zero the perfmon is destroyed.
*/
@ -74,6 +76,8 @@ struct vc4_perfmon {
struct vc4_dev {
struct drm_device base;
bool is_vc5;
unsigned int irq;
struct vc4_hvs *hvs;
@ -316,6 +320,7 @@ struct vc4_v3d {
};
struct vc4_hvs {
struct vc4_dev *vc4;
struct platform_device *pdev;
void __iomem *regs;
u32 __iomem *dlist;
@ -333,9 +338,6 @@ struct vc4_hvs {
struct drm_mm_node mitchell_netravali_filter;
struct debugfs_regset32 regset;
/* HVS version 5 flag, therefore requires updated dlist structures */
bool hvs5;
};
struct vc4_plane {
@ -580,6 +582,8 @@ to_vc4_crtc_state(struct drm_crtc_state *crtc_state)
#define VC4_REG32(reg) { .name = #reg, .offset = reg }
struct vc4_exec_info {
struct vc4_dev *dev;
/* Sequence number for this bin/render job. */
uint64_t seqno;
@ -701,6 +705,8 @@ struct vc4_exec_info {
* released when the DRM file is closed should be placed here.
*/
struct vc4_file {
struct vc4_dev *dev;
struct {
struct idr idr;
struct mutex lock;
@ -814,7 +820,7 @@ struct vc4_validated_shader_info {
struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size);
struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size,
bool from_cache, enum vc4_kernel_bo_type type);
int vc4_dumb_create(struct drm_file *file_priv,
int vc4_bo_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
@ -885,6 +891,7 @@ static inline void vc4_debugfs_add_regset32(struct drm_device *drm,
/* vc4_drv.c */
void __iomem *vc4_ioremap_regs(struct platform_device *dev, int index);
int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args);
/* vc4_dpi.c */
extern struct platform_driver vc4_dpi_driver;


@ -76,6 +76,9 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
u32 i;
int ret = 0;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (!vc4->v3d) {
DRM_DEBUG("VC4_GET_HANG_STATE with no VC4 V3D probed\n");
return -ENODEV;
@ -386,6 +389,9 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
unsigned long timeout_expire;
DEFINE_WAIT(wait);
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (vc4->finished_seqno >= seqno)
return 0;
@ -468,6 +474,9 @@ vc4_submit_next_bin_job(struct drm_device *dev)
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_exec_info *exec;
if (WARN_ON_ONCE(vc4->is_vc5))
return;
again:
exec = vc4_first_bin_job(vc4);
if (!exec)
@ -513,6 +522,9 @@ vc4_submit_next_render_job(struct drm_device *dev)
if (!exec)
return;
if (WARN_ON_ONCE(vc4->is_vc5))
return;
/* A previous RCL may have written to one of our textures, and
* our full cache flush at bin time may have occurred before
* that RCL completed. Flush the texture cache now, but not
@ -531,6 +543,9 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
struct vc4_dev *vc4 = to_vc4_dev(dev);
bool was_empty = list_empty(&vc4->render_job_list);
if (WARN_ON_ONCE(vc4->is_vc5))
return;
list_move_tail(&exec->head, &vc4->render_job_list);
if (was_empty)
vc4_submit_next_render_job(dev);
@ -997,6 +1012,9 @@ vc4_job_handle_completed(struct vc4_dev *vc4)
unsigned long irqflags;
struct vc4_seqno_cb *cb, *cb_temp;
if (WARN_ON_ONCE(vc4->is_vc5))
return;
spin_lock_irqsave(&vc4->job_lock, irqflags);
while (!list_empty(&vc4->job_done_list)) {
struct vc4_exec_info *exec =
@ -1033,6 +1051,9 @@ int vc4_queue_seqno_cb(struct drm_device *dev,
struct vc4_dev *vc4 = to_vc4_dev(dev);
unsigned long irqflags;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
cb->func = func;
INIT_WORK(&cb->work, vc4_seqno_cb_work);
@ -1083,8 +1104,12 @@ int
vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_wait_seqno *args = data;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
&args->timeout_ns);
}
@ -1093,11 +1118,15 @@ int
vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
int ret;
struct drm_vc4_wait_bo *args = data;
struct drm_gem_object *gem_obj;
struct vc4_bo *bo;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (args->pad != 0)
return -EINVAL;
@ -1144,6 +1173,9 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
args->shader_rec_size,
args->bo_handle_count);
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
if (!vc4->v3d) {
DRM_DEBUG("VC4_SUBMIT_CL with no VC4 V3D probed\n");
return -ENODEV;
@ -1167,6 +1199,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
DRM_ERROR("malloc failure on exec struct\n");
return -ENOMEM;
}
exec->dev = vc4;
ret = vc4_v3d_pm_get(vc4);
if (ret) {
@ -1276,6 +1309,9 @@ int vc4_gem_init(struct drm_device *dev)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
vc4->dma_fence_context = dma_fence_context_alloc(1);
INIT_LIST_HEAD(&vc4->bin_job_list);
@ -1321,11 +1357,15 @@ static void vc4_gem_destroy(struct drm_device *dev, void *unused)
int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_gem_madvise *args = data;
struct drm_gem_object *gem_obj;
struct vc4_bo *bo;
int ret;
if (WARN_ON_ONCE(vc4->is_vc5))
return -ENODEV;
switch (args->madv) {
case VC4_MADV_DONTNEED:
case VC4_MADV_WILLNEED:


@ -1481,7 +1481,7 @@ vc4_hdmi_encoder_compute_mode_clock(const struct drm_display_mode *mode,
unsigned int bpc,
enum vc4_hdmi_output_format fmt)
{
unsigned long long clock = mode->clock * 1000;
unsigned long long clock = mode->clock * 1000ULL;
if (mode->flags & DRM_MODE_FLAG_DBLCLK)
clock = clock * 2;
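
The ULL suffix is the entire fix: mode->clock is an int holding kHz, so mode->clock * 1000 is a 32-bit multiply that can overflow before the result is widened into the unsigned long long. A minimal sketch of the difference (hypothetical value, chosen to overflow):

    int khz = 2500000;                        /* 2.5 GHz in kHz */
    unsigned long long bad  = khz * 1000;     /* 32-bit multiply, exceeds INT_MAX */
    unsigned long long good = khz * 1000ULL;  /* 64-bit multiply: 2500000000 */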


@@ -220,10 +220,11 @@ u8 vc4_hvs_get_fifo_frame_count(struct vc4_hvs *hvs, unsigned int fifo)
 int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
 {
+	struct vc4_dev *vc4 = hvs->vc4;
 	u32 reg;
 	int ret;
 
-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
 		return output;
 
 	switch (output) {
@@ -273,6 +274,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
 static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
 				struct drm_display_mode *mode, bool oneshot)
 {
+	struct vc4_dev *vc4 = hvs->vc4;
 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
 	struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state);
 	unsigned int chan = vc4_crtc_state->assigned_channel;
@@ -291,7 +293,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
 	 */
 	dispctrl = SCALER_DISPCTRLX_ENABLE;
 
-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
 		dispctrl |= VC4_SET_FIELD(mode->hdisplay,
 					  SCALER_DISPCTRLX_WIDTH) |
 			    VC4_SET_FIELD(mode->vdisplay,
@@ -312,7 +314,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
 	HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
 		  SCALER_DISPBKGND_AUTOHS |
-		  ((!hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) |
+		  ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
 		  (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
 
 	/* Reload the LUT, since the SRAMs would have been disabled if
@@ -617,11 +619,9 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
 	if (!hvs)
 		return -ENOMEM;
+	hvs->vc4 = vc4;
 	hvs->pdev = pdev;
 
-	if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm2711-hvs"))
-		hvs->hvs5 = true;
-
 	hvs->regs = vc4_ioremap_regs(pdev, 0);
 	if (IS_ERR(hvs->regs))
 		return PTR_ERR(hvs->regs);
@@ -630,7 +630,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
 	hvs->regset.regs = hvs_regs;
 	hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
 
-	if (hvs->hvs5) {
+	if (vc4->is_vc5) {
 		hvs->core_clk = devm_clk_get(&pdev->dev, NULL);
 		if (IS_ERR(hvs->core_clk)) {
 			dev_err(&pdev->dev, "Couldn't get core clock\n");
@@ -644,7 +644,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
 		}
 	}
 
-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
 		hvs->dlist = hvs->regs + SCALER_DLIST_START;
 	else
 		hvs->dlist = hvs->regs + SCALER5_DLIST_START;
@@ -665,7 +665,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
 	 * between planes when they don't overlap on the screen, but
 	 * for now we just allocate globally.
 	 */
-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
 		/* 48k words of 2x12-bit pixels */
 		drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
 	else

--- a/drivers/gpu/drm/vc4/vc4_irq.c
+++ b/drivers/gpu/drm/vc4/vc4_irq.c

@@ -265,6 +265,9 @@ vc4_irq_enable(struct drm_device *dev)
 {
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	if (!vc4->v3d)
 		return;
@@ -279,6 +282,9 @@ vc4_irq_disable(struct drm_device *dev)
 {
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	if (!vc4->v3d)
 		return;
@@ -296,8 +302,12 @@ vc4_irq_disable(struct drm_device *dev)
 int vc4_irq_install(struct drm_device *dev, int irq)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	int ret;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	if (irq == IRQ_NOTCONNECTED)
 		return -ENOTCONN;
@@ -316,6 +326,9 @@ void vc4_irq_uninstall(struct drm_device *dev)
 {
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	vc4_irq_disable(dev);
 	free_irq(vc4->irq, dev);
 }
@@ -326,6 +339,9 @@ void vc4_irq_reset(struct drm_device *dev)
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	unsigned long irqflags;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	/* Acknowledge any stale IRQs. */
 	V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);

--- a/drivers/gpu/drm/vc4/vc4_kms.c
+++ b/drivers/gpu/drm/vc4/vc4_kms.c

@@ -393,7 +393,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 		old_hvs_state->fifo_state[channel].pending_commit = NULL;
 	}
 
-	if (vc4->hvs->hvs5) {
+	if (vc4->is_vc5) {
 		unsigned long state_rate = max(old_hvs_state->core_clock_rate,
 					       new_hvs_state->core_clock_rate);
 		unsigned long core_rate = max_t(unsigned long,
@@ -412,7 +412,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 	vc4_ctm_commit(vc4, state);
 
-	if (vc4->hvs->hvs5)
+	if (vc4->is_vc5)
 		vc5_hvs_pv_muxing_commit(vc4, state);
 	else
 		vc4_hvs_pv_muxing_commit(vc4, state);
@@ -430,7 +430,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 	drm_atomic_helper_cleanup_planes(dev, state);
 
-	if (vc4->hvs->hvs5) {
+	if (vc4->is_vc5) {
 		drm_dbg(dev, "Running the core clock at %lu Hz\n",
 			new_hvs_state->core_clock_rate);
@@ -479,8 +479,12 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
 					     struct drm_file *file_priv,
 					     const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct drm_mode_fb_cmd2 mode_cmd_local;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return ERR_PTR(-ENODEV);
+
 	/* If the user didn't specify a modifier, use the
 	 * vc4_set_tiling_ioctl() state for the BO.
 	 */
@@ -997,11 +1001,15 @@ static const struct drm_mode_config_funcs vc4_mode_funcs = {
 	.fb_create = vc4_fb_create,
 };
 
+static const struct drm_mode_config_funcs vc5_mode_funcs = {
+	.atomic_check = vc4_atomic_check,
+	.atomic_commit = drm_atomic_helper_commit,
+	.fb_create = drm_gem_fb_create,
+};
+
 int vc4_kms_load(struct drm_device *dev)
 {
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
-	bool is_vc5 = of_device_is_compatible(dev->dev->of_node,
-					      "brcm,bcm2711-vc5");
 	int ret;
 
 	/*
@@ -1009,7 +1017,7 @@ int vc4_kms_load(struct drm_device *dev)
 	 * the BCM2711, but the load tracker computations are used for
 	 * the core clock rate calculation.
 	 */
-	if (!is_vc5) {
+	if (!vc4->is_vc5) {
 		/* Start with the load tracker enabled. Can be
 		 * disabled through the debugfs load_tracker file.
 		 */
@@ -1025,7 +1033,7 @@ int vc4_kms_load(struct drm_device *dev)
 		return ret;
 	}
 
-	if (is_vc5) {
+	if (vc4->is_vc5) {
 		dev->mode_config.max_width = 7680;
 		dev->mode_config.max_height = 7680;
 	} else {
@@ -1033,7 +1041,7 @@ int vc4_kms_load(struct drm_device *dev)
 		dev->mode_config.max_height = 2048;
 	}
 
-	dev->mode_config.funcs = &vc4_mode_funcs;
+	dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
 	dev->mode_config.helper_private = &vc4_mode_config_helpers;
 	dev->mode_config.preferred_depth = 24;
 	dev->mode_config.async_page_flip = true;

--- a/drivers/gpu/drm/vc4/vc4_perfmon.c
+++ b/drivers/gpu/drm/vc4/vc4_perfmon.c

@@ -17,13 +17,27 @@
 void vc4_perfmon_get(struct vc4_perfmon *perfmon)
 {
+	struct vc4_dev *vc4 = perfmon->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	if (perfmon)
 		refcount_inc(&perfmon->refcnt);
 }
 
 void vc4_perfmon_put(struct vc4_perfmon *perfmon)
 {
-	if (perfmon && refcount_dec_and_test(&perfmon->refcnt))
+	struct vc4_dev *vc4;
+
+	if (!perfmon)
+		return;
+
+	vc4 = perfmon->dev;
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
+	if (refcount_dec_and_test(&perfmon->refcnt))
 		kfree(perfmon);
 }
@@ -32,6 +46,9 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon)
 	unsigned int i;
 	u32 mask;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
 		return;
@@ -49,6 +66,9 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
 {
 	unsigned int i;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	if (WARN_ON_ONCE(!vc4->active_perfmon ||
 			 perfmon != vc4->active_perfmon))
 		return;
@@ -64,8 +84,12 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
 struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
 	struct vc4_perfmon *perfmon;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return NULL;
+
 	mutex_lock(&vc4file->perfmon.lock);
 	perfmon = idr_find(&vc4file->perfmon.idr, id);
 	vc4_perfmon_get(perfmon);
@@ -76,8 +100,14 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
 void vc4_perfmon_open_file(struct vc4_file *vc4file)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	mutex_init(&vc4file->perfmon.lock);
 	idr_init_base(&vc4file->perfmon.idr, VC4_PERFMONID_MIN);
+	vc4file->dev = vc4;
 }
 
 static int vc4_perfmon_idr_del(int id, void *elem, void *data)
@@ -91,6 +121,11 @@ static int vc4_perfmon_idr_del(int id, void *elem, void *data)
 void vc4_perfmon_close_file(struct vc4_file *vc4file)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	mutex_lock(&vc4file->perfmon.lock);
 	idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, NULL);
 	idr_destroy(&vc4file->perfmon.idr);
@@ -107,6 +142,9 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
 	unsigned int i;
 	int ret;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	if (!vc4->v3d) {
 		DRM_DEBUG("Creating perfmon no VC4 V3D probed\n");
 		return -ENODEV;
@@ -127,6 +165,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
 			  GFP_KERNEL);
 	if (!perfmon)
 		return -ENOMEM;
+	perfmon->dev = vc4;
 
 	for (i = 0; i < req->ncounters; i++)
 		perfmon->events[i] = req->events[i];
@@ -157,6 +196,9 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
 	struct drm_vc4_perfmon_destroy *req = data;
 	struct vc4_perfmon *perfmon;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	if (!vc4->v3d) {
 		DRM_DEBUG("Destroying perfmon no VC4 V3D probed\n");
 		return -ENODEV;
@@ -182,6 +224,9 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
 	struct vc4_perfmon *perfmon;
 	int ret;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	if (!vc4->v3d) {
 		DRM_DEBUG("Getting perfmon no VC4 V3D probed\n");
 		return -ENODEV;

--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c

@@ -489,10 +489,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
 	}
 
 	/* Align it to 64 or 128 (hvs5) bytes */
-	lbm = roundup(lbm, vc4->hvs->hvs5 ? 128 : 64);
+	lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);
 
 	/* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
-	lbm /= vc4->hvs->hvs5 ? 4 : 2;
+	lbm /= vc4->is_vc5 ? 4 : 2;
 
 	return lbm;
 }
@@ -608,7 +608,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
 		ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
 						 &vc4_state->lbm,
 						 lbm_size,
-						 vc4->hvs->hvs5 ? 64 : 32,
+						 vc4->is_vc5 ? 64 : 32,
 						 0, 0);
 		spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
@@ -917,7 +917,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
 	mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
 			  fb->format->has_alpha;
 
-	if (!vc4->hvs->hvs5) {
+	if (!vc4->is_vc5) {
 		/* Control word */
 		vc4_dlist_write(vc4_state,
 				SCALER_CTL0_VALID |
@@ -1321,6 +1321,10 @@ static int vc4_plane_atomic_async_check(struct drm_plane *plane,
 	old_vc4_state = to_vc4_plane_state(plane->state);
 	new_vc4_state = to_vc4_plane_state(new_plane_state);
 
+	if (!new_vc4_state->hw_dlist)
+		return -EINVAL;
+
 	if (old_vc4_state->dlist_count != new_vc4_state->dlist_count ||
 	    old_vc4_state->pos0_offset != new_vc4_state->pos0_offset ||
 	    old_vc4_state->pos2_offset != new_vc4_state->pos2_offset ||
@@ -1385,6 +1389,13 @@ static const struct drm_plane_helper_funcs vc4_plane_helper_funcs = {
 	.atomic_async_update = vc4_plane_atomic_async_update,
 };
 
+static const struct drm_plane_helper_funcs vc5_plane_helper_funcs = {
+	.atomic_check = vc4_plane_atomic_check,
+	.atomic_update = vc4_plane_atomic_update,
+	.atomic_async_check = vc4_plane_atomic_async_check,
+	.atomic_async_update = vc4_plane_atomic_async_update,
+};
+
 static bool vc4_format_mod_supported(struct drm_plane *plane,
 				     uint32_t format,
 				     uint64_t modifier)
@@ -1453,14 +1464,13 @@ static const struct drm_plane_funcs vc4_plane_funcs = {
 struct drm_plane *vc4_plane_init(struct drm_device *dev,
 				 enum drm_plane_type type)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct drm_plane *plane = NULL;
 	struct vc4_plane *vc4_plane;
 	u32 formats[ARRAY_SIZE(hvs_formats)];
 	int num_formats = 0;
 	int ret = 0;
 	unsigned i;
-	bool hvs5 = of_device_is_compatible(dev->dev->of_node,
-					    "brcm,bcm2711-vc5");
 	static const uint64_t modifiers[] = {
 		DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED,
 		DRM_FORMAT_MOD_BROADCOM_SAND128,
@@ -1476,7 +1486,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
 		return ERR_PTR(-ENOMEM);
 
 	for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
-		if (!hvs_formats[i].hvs5_only || hvs5) {
+		if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
 			formats[num_formats] = hvs_formats[i].drm;
 			num_formats++;
 		}
@@ -1490,6 +1500,9 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (vc4->is_vc5)
+		drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+	else
+		drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
 
 	drm_plane_create_alpha_property(plane);

--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
+++ b/drivers/gpu/drm/vc4/vc4_render_cl.c

@@ -593,11 +593,15 @@ vc4_rcl_render_config_surface_setup(struct vc4_exec_info *exec,
 int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	struct vc4_rcl_setup setup = {0};
 	struct drm_vc4_submit_cl *args = exec->args;
 	bool has_bin = args->bin_cl_size != 0;
 	int ret;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	if (args->min_x_tile > args->max_x_tile ||
 	    args->min_y_tile > args->max_y_tile) {
 		DRM_DEBUG("Bad render tile set (%d,%d)-(%d,%d)\n",

--- a/drivers/gpu/drm/vc4/vc4_v3d.c
+++ b/drivers/gpu/drm/vc4/vc4_v3d.c

@@ -127,6 +127,9 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
 int
 vc4_v3d_pm_get(struct vc4_dev *vc4)
 {
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	mutex_lock(&vc4->power_lock);
 	if (vc4->power_refcount++ == 0) {
 		int ret = pm_runtime_get_sync(&vc4->v3d->pdev->dev);
@@ -145,6 +148,9 @@ vc4_v3d_pm_get(struct vc4_dev *vc4)
 void
 vc4_v3d_pm_put(struct vc4_dev *vc4)
 {
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	mutex_lock(&vc4->power_lock);
 	if (--vc4->power_refcount == 0) {
 		pm_runtime_mark_last_busy(&vc4->v3d->pdev->dev);
@@ -172,6 +178,9 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
 	uint64_t seqno = 0;
 	struct vc4_exec_info *exec;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 try_again:
 	spin_lock_irqsave(&vc4->job_lock, irqflags);
 	slot = ffs(~vc4->bin_alloc_used);
@@ -316,6 +325,9 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
 {
 	int ret = 0;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	mutex_lock(&vc4->bin_bo_lock);
 
 	if (used && *used)
@@ -348,6 +360,9 @@ static void bin_bo_release(struct kref *ref)
 void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
 {
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 	mutex_lock(&vc4->bin_bo_lock);
 	kref_put(&vc4->bin_bo_kref, bin_bo_release);
 	mutex_unlock(&vc4->bin_bo_lock);

--- a/drivers/gpu/drm/vc4/vc4_validate.c
+++ b/drivers/gpu/drm/vc4/vc4_validate.c

@@ -105,9 +105,13 @@ size_is_lt(uint32_t width, uint32_t height, int cpp)
 struct drm_gem_cma_object *
 vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
 {
+	struct vc4_dev *vc4 = exec->dev;
 	struct drm_gem_cma_object *obj;
 	struct vc4_bo *bo;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return NULL;
+
 	if (hindex >= exec->bo_count) {
 		DRM_DEBUG("BO index %d greater than BO count %d\n",
 			  hindex, exec->bo_count);
@@ -160,10 +164,14 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_cma_object *fbo,
 		   uint32_t offset, uint8_t tiling_format,
 		   uint32_t width, uint32_t height, uint8_t cpp)
 {
+	struct vc4_dev *vc4 = exec->dev;
 	uint32_t aligned_width, aligned_height, stride, size;
 	uint32_t utile_w = utile_width(cpp);
 	uint32_t utile_h = utile_height(cpp);
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return false;
+
 	/* The shaded vertex format stores signed 12.4 fixed point
 	 * (-2048,2047) offsets from the viewport center, so we should
 	 * never have a render target larger than 4096. The texture
@@ -482,10 +490,14 @@ vc4_validate_bin_cl(struct drm_device *dev,
 		    void *unvalidated,
 		    struct vc4_exec_info *exec)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	uint32_t len = exec->args->bin_cl_size;
 	uint32_t dst_offset = 0;
 	uint32_t src_offset = 0;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	while (src_offset < len) {
 		void *dst_pkt = validated + dst_offset;
 		void *src_pkt = unvalidated + src_offset;
@@ -926,9 +938,13 @@ int
 vc4_validate_shader_recs(struct drm_device *dev,
 			 struct vc4_exec_info *exec)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
 	uint32_t i;
 	int ret = 0;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
 	for (i = 0; i < exec->shader_state_count; i++) {
 		ret = validate_gl_shader_rec(dev, exec, &exec->shader_state[i]);
 		if (ret)

--- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c

@@ -778,6 +778,7 @@ vc4_handle_branch_target(struct vc4_shader_validation_state *validation_state)
 struct vc4_validated_shader_info *
 vc4_validate_shader(struct drm_gem_cma_object *shader_obj)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(shader_obj->base.dev);
 	bool found_shader_end = false;
 	int shader_end_ip = 0;
 	uint32_t last_thread_switch_ip = -3;
@@ -785,6 +786,9 @@ vc4_validate_shader(struct drm_gem_cma_object *shader_obj)
 	struct vc4_validated_shader_info *validated_shader = NULL;
 	struct vc4_shader_validation_state validation_state;
 
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return NULL;
+
 	memset(&validation_state, 0, sizeof(validation_state));
 	validation_state.shader = shader_obj->vaddr;
 	validation_state.max_ip = shader_obj->base.size / sizeof(uint64_t);