This feature was previously added for the 8960 sglte target,
in commit 21274cf221, but it also
applies to the 8064 sglte2 target. The original commit message
follows.
During SSR, a reset of the external modem is first issued
and mdm_do_soft_power_on() toggles ap2mdm_soft_reset, which
in turn toggles PS_HOLD. Then, as part of SSR, the external
modem is powered up and mdm_do_soft_power_on() toggles the
gpio again. For PMIC register stabilization we need a 1 sec delay
between subsequent mdm_do_soft_power_on() calls.
By default the delay is 500 msec, so add another 500 msec for
stabilization.
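Illustratively, the change amounts to extending the settle delay in the
soft-power-on path to a full second (the gpio name and pulse timing here
are assumptions, not the exact driver code):

    /* sketch: pulse ap2mdm_soft_reset, which toggles PS_HOLD */
    gpio_set_value(ap2mdm_soft_reset_gpio, 1);  /* assumed gpio name */
    usleep_range(4000, 5000);                   /* assumed pulse width */
    gpio_set_value(ap2mdm_soft_reset_gpio, 0);
    /* was 500 msec; 1 sec gives the PMIC registers time to stabilize
     * between back-to-back soft power-ons during SSR */
    msleep(1000);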
Signed-off-by: Joel King <joelking@codeaurora.org>
Conflicts:
arch/arm/mach-msm/board-8064.c
Change-Id: I8ac22d94205cb309dd5ba4b1f3a20933632298db
What an annoying bug...
To reproduce:
* Turn device off, unplugged
* Plug in, allow offmode charging to do its thing
* Once on the black screen, turn device on
* It'll hang on the splash screen until you pull the battery
Likely this will fix the random occurrences of this too. That is yet to
be verified though.
Change-Id: Id3c419d93e7ae73c6b7d34e989483003cfece7f3
According to the eMMC 4.5 spec "Flushing a large amount of cached data may
take very unpredictably long time". Therefore the timeout for FLUSH should
be increased to prevent timeouts.
In case the timeout occurs, an HPI is issued.
CRs-Fixed: 500874
Change-Id: Ib00d087d3fe2fa72f5eac096976d3f24b5e4966a
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Some time-consuming operations such as BKOPS, SANITIZE and CACHE
flush/off use the SWITCH command (CMD6), but as these operations don't
have a card-specification-defined timeout for completion, we may see
timeout errors if the card doesn't complete the operation within the
SW defined timeout. If the SW defined timeout is hit, the above
operations are considered to have failed, and no real recovery
mechanism is implemented after the timeout.
Most of the above operations (BKOPS/SANITIZE/CACHE flush/off) can be
interrupted by sending the HPI (High Priority Interrupt) command to the
card if they are taking longer than expected. This change adds the base
support which allows these operations to be interrupted with HPI after
a timeout.
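A sketch of the recovery path, built around the mmc core's existing
mmc_interrupt_hpi() helper (the surrounding timeout bookkeeping is
assumed):

    /* on SW timeout of a CMD6-driven operation (BKOPS, SANITIZE,
     * CACHE flush/off), try to abort it with HPI instead of simply
     * declaring failure */
    err = mmc_interrupt_hpi(card);
    if (err)
        pr_err("%s: HPI after timeout failed (%d)\n",
               mmc_hostname(card->host), err);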
Change-Id: Ibd9061525756aaae656b1ceeeaed62e04fb80cce
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
When the clock scaling state is changed from MMC_LOAD_LOW to
MMC_LOAD_HIGH, the clocks are first scaled up and then tuning
is performed. But in case of tuning failure, the current code
does nothing and still retains the previous clock scaling stats
(state and curr_freq within struct clk_scaling). Hence, correct
it to scale down the clocks in case of tuning failure so that
the clock scaling stats reflect the correct status. This also helps
proceed with data transfers at a lower clock rate in such cases.
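Roughly, the corrected error path looks like this (the scale-down
helper and its placement are illustrative, not the exact driver code):

    err = host->ops->execute_tuning(host, opcode);
    if (err) {
        /* tuning at the higher rate failed: drop the clocks back
         * down so clk_scaling.state and curr_freq reflect reality
         * and transfers continue at the lower rate */
        scale_down_clocks(host);            /* hypothetical helper */
        host->clk_scaling.state = MMC_LOAD_LOW;
    }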
Change-Id: I7e9379d1e3ddc863132af31019604c22a42f8d59
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
The original intention of polling for BKOPS completion was to give
the card enough time to perform the BKOPS before it is runtime suspended.
But as the BKOPS completion polling was happening in a different
context, it may race with card runtime/platform suspend, which is quite
difficult to fix. So instead of polling for BKOPS, let the runtime
suspend be deferred if BKOPS is running on the card. Also, if BKOPS is
running when platform suspend is triggered, stop the BKOPS before
suspending the card.
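Sketched with the mmc core helpers of that era (placement inside
drivers/mmc/core/core.c is approximate):

    /* runtime-suspend path: defer suspend while BKOPS is running */
    if (mmc_card_doing_bkops(host->card))
        return -EBUSY;  /* runtime PM will retry later */

    /* platform-suspend path: stop BKOPS before suspending the card */
    if (mmc_card_doing_bkops(host->card)) {
        err = mmc_stop_bkops(host->card);
        if (err)
            return err;
    }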
Conflicts:
drivers/mmc/core/core.c
include/linux/mmc/card.h
CRs-Fixed: 489523
Change-Id: I21e524dc2da37c4985c210abfaca00a28049c651
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Subhash Jadavani <subhashj@codeaurora.org>
The watermark check consists of two sub-checks. The first one is:

    if (free_pages <= min + lowmem_reserve)
        return false;

The check assures that there is a minimal amount of RAM in the zone. If
CMA is used then free_pages is reduced by the number of free pages
in CMA prior to the above-mentioned check:

    if (!(alloc_flags & ALLOC_CMA))
        free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
This prevents the zone from being drained from pages available for
non-movable allocations.
The second check prevents the zone from getting too fragmented:

    for (o = 0; o < order; o++) {
        free_pages -= z->free_area[o].nr_free << o;
        min >>= 1;
        if (free_pages <= min)
            return false;
    }
The field z->free_area[o].nr_free is equal to the number of free pages,
including free CMA pages. Therefore the CMA pages are subtracted twice.
This may cause a false-positive failure of __zone_watermark_ok() if the
CMA area gets strongly fragmented. In such a case there are many 0-order
free pages located in CMA. Those pages are subtracted twice, so they
will quickly drain free_pages during the check against fragmentation.
The test then fails even though there are many free non-CMA pages in
the zone.
This patch fixes the issue by subtracting CMA pages only for the purpose
of the (free_pages <= min + lowmem_reserve) check.
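The reworked check then looks roughly like this (following the upstream
fix; only the first sub-check subtracts the CMA pages):

    static bool __zone_watermark_ok(struct zone *z, int order,
            unsigned long mark, int classzone_idx,
            int alloc_flags, long free_pages)
    {
        long min = mark;
        long lowmem_reserve = z->lowmem_reserve[classzone_idx];
        long free_cma = 0;
        int o;

        free_pages -= (1 << order) - 1;
        if (alloc_flags & ALLOC_HIGH)
            min -= min / 2;
        if (alloc_flags & ALLOC_HARDER)
            min -= min / 4;
        /* count only the non-CMA pool for the base check */
        if (!(alloc_flags & ALLOC_CMA))
            free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);

        if (free_pages - free_cma <= min + lowmem_reserve)
            return false;
        /* fragmentation check: CMA pages now subtracted only once */
        for (o = 0; o < order; o++) {
            free_pages -= z->free_area[o].nr_free << o;
            min >>= 1;
            if (free_pages <= min)
                return false;
        }
        return true;
    }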
Laura said:
We were observing allocation failures of higher order pages (order 5 =
128K typically) under tight memory conditions resulting in driver
failure. The output from the page allocation failure showed plenty of
free pages of the appropriate order/type/zone and mostly CMA pages in
the lower orders.
For full disclosure, we still observed some page allocation failures
even after applying the patch but the number was drastically reduced and
those failures were attributed to fragmentation/other system issues.
Change-Id: Ic2c0c233993c41c630e24d71df5e12aa614588e5
CRs-Fixed: 549847
Signed-off-by: Tomasz Stanislawski <t.stanislaws@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: <stable@vger.kernel.org> [3.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 026b08147923142e925a7d0aaa39038055ae0156
Git-repo: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
Add support for MDP_YCBYCR_H2V1 interleaved YUV format in rotator
block.
Change-Id: I4bb192aaab1e72f6e5687ae222a5f9ea2c254bd4
Signed-off-by: Mayank Chopra <makchopra@codeaurora.org>
Add support for the MDP_YCBYCR_H2V1 interleaved YUV format to MDP
for a-family targets.
Change-Id: I5afb84a95693d1ced114152364782a10c4d56bc2
Signed-off-by: Mayank Chopra <makchopra@codeaurora.org>
Some Cisco phones create huge messages that are spread over multiple packets.
After calculating the offset of the SIP body, it is validated to be within
the packet and the packet is dropped otherwise. This breaks operation of
these phones. Since connection tracking is supposed to be passive, just let
those packets pass unmodified and untracked.
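The relevant change in sip_help_tcp() is a one-liner (variable names
taken from the surrounding context in net/netfilter/nf_conntrack_sip.c):

    msglen = origlen = end - dptr;
    if (msglen > datalen)
        return NF_ACCEPT;   /* was: return NF_DROP; */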
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Conflicts:
net/netfilter/nf_conntrack_sip.c
This commit appears in the 3.8 and 3.9 branches of the Linux kernel
and according to feanor3 on xda-developers:
The "Cisco Jabber" app lets you use your cell phone as a SIP
endpoint with your work number on a Cisco phone system. The
registration packets that Cisco uses are apparently larger
than normal. With your kernel, the registration does not complete.
With your kernel and line 1421 changed to NF_ACCEPT, the registration
completes.
Change-Id: If0c4eff68fa10af43767ad49808394910cae4309
Asynchronous I/O latency to a solid-state disk greatly increased
between the 2.6.32 and 3.0 kernels. By removing the plug from
do_io_submit(), we observed a 34% improvement in the I/O latency.
Unfortunately, at this level, we don't know if the request is to
a rotating disk or not.
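The change itself is just the removal of the plug from do_io_submit()
in fs/aio.c, roughly:

    struct blk_plug plug;           /* removed */

    blk_start_plug(&plug);          /* removed */
    /* ... io_submit_one() loop unchanged ... */
    blk_finish_plug(&plug);         /* removed */

Without the plug each request is dispatched immediately instead of
being batched, which is what recovers the latency on SSDs.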
Change-Id: I7101df956473ed9fd5dcff18e473dd93b688a5c1
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: linux-aio@kvack.org
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Add ftrace event trace_sched_cpu_hotplug to track cpu
hot-add and hot-remove events.
This is useful in a variety of power, performance and
debug analysis scenarios.
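A minimal sketch of such an event definition (the field names here are
assumptions, not necessarily the ones in the patch):

    TRACE_EVENT(sched_cpu_hotplug,

        TP_PROTO(int affected_cpu, int error, int status),

        TP_ARGS(affected_cpu, error, status),

        TP_STRUCT__entry(
            __field(int, affected_cpu)
            __field(int, error)
            __field(int, status)
        ),

        TP_fast_assign(
            __entry->affected_cpu = affected_cpu;
            __entry->error        = error;
            __entry->status       = status;
        ),

        TP_printk("cpu %d %s error=%d", __entry->affected_cpu,
              __entry->status ? "online" : "offline",
              __entry->error)
    );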
Change-Id: I5d202c7a229ffacc3aafb7cf9afee0b0ee7b0931
Signed-off-by: Arun Bharadwaj <abharadw@codeaurora.org>
The watchdog driver creates per-cpu kernel threads that are bound to
separate cpus. Use of smp_processor_id() by such threads in
__touch_watchdog() is perfectly fine, as they are generally pinned to
the cpu on which they are running.
During a cpu offline event, however, affinity for the watchdog thread
bound to the dying cpu is broken before the thread is killed. There is
a small time window between affinity being broken and the thread dying
in which the thread can run on a cpu even though its affinity is no
longer set exclusively to that cpu. This will trigger a warning from
the use of smp_processor_id(), which is harmless in this case, as the
thread will still provide the required heartbeat on the cpu where it is
running and will moreover shortly be killed.
Mute the harmless warning by using raw_smp_processor_id() in
__touch_watchdog(). This seems a less intrusive fix than killing the
threads in a CPU_DOWN_PREPARE event handler.
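The fix in kernel/watchdog.c is a one-line substitution (sketched
against the 3.x layout of __touch_watchdog()):

    static void __touch_watchdog(void)
    {
        /* raw_smp_processor_id() skips the preemption-safety check
         * that warns when the caller could migrate; harmless here,
         * as explained above */
        int this_cpu = raw_smp_processor_id();

        __this_cpu_write(watchdog_touch_ts, get_timestamp(this_cpu));
    }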
Change-Id: I7fa22ff529aeea0ec3d5610cfec87aea92cf95a0
CRs-Fixed: 517188
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When rt_rq is throttled
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.
Instead unthrottle rt runqueues before migrating tasks.
Additionally: move unthrottle_offline_cfs_rqs() to rq_offline_fair()
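Sketch of the two pieces, assuming the 3.x rt/fair scheduler layout:

    /* kernel/sched/rt.c: __disable_runtime() -- pretending we have
     * infinite runtime only helps if the rq is also unthrottled */
    rt_rq->rt_runtime = RUNTIME_INF;
    rt_rq->rt_throttled = 0;

    /* kernel/sched/fair.c: unthrottle cfs rqs when the cpu goes
     * offline, so pick_next_task() can reach every runnable task
     * from migrate_tasks() */
    static void rq_offline_fair(struct rq *rq)
    {
        update_sysctl();
        unthrottle_offline_cfs_rqs(rq);
    }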
Change-Id: If8a4a399f1a14b7f4789c1b205dcfadbde555214
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/5FBF8E85CA34454794F0F7ECBA79798F379D3648B7@HQMAIL04.nvidia.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Git-commit: a4c96ae319b8047f62dedbe1eac79e321c185749
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
cpaccess uses sysdev, which is being deprecated.
Move the cpaccess node out of the /sys/devices/system
directory to /sys/devices. Temporarily leave the original
sysdev node for migration purposes, until userspace
applications can be moved to the new node.
Change-Id: Iacc776968f892fc6c6463764e576d987e4371716
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
This framework wasn't accepted upstream and is not used. Drop it.
Change-Id: Ieb381a679873cbfb4baf245a5bcb8df1c730d964
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Fix:
arch/arm/mm/init.c: In function 'arm_memblock_init':
arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast
by fixing the typecast in its definition when DMA_ZONE is disabled.
This was missed in 4986e5c7c (ARM: mm: fix type of the arm_dma_limit
global variable).
Change-Id: Id076f2bebe307609265afdd4229181d2004c5f9c
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
This reverts commit f27e4f0e73.
An alternate fix has been proposed where userspace programs
set ADDR_COMPAT_LAYOUT. Bring back the topdown mmap support.
Change-Id: Ibd6e74d406db3a5ddb609e2d2a7a6e9dc2080eca
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
The platform data was requesting the wrong number;
it should request the first perf irq item.
Change-Id: I4a25b4704ed9e76172c6b0d4ca4b28a3286ab2ad
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
When a CPU is hotplugged out while a perf session
is active, disarm the IRQ when the CPU is preparing
to die. This ensures that perf doesn't lock up when
it tries to free the irq of a hotplugged CPU.
Similarly, when a CPU comes online during a perf session
enable the IRQ so that perf doesn't try to disable
an unarmed IRQ when it finishes.
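A sketch using the hotplug-notifier API of that era (the notifier and
irq variable names are illustrative):

    static int perf_cpu_notify(struct notifier_block *nb,
                   unsigned long action, void *hcpu)
    {
        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_DYING:
            /* disarm before the cpu dies so perf never tries
             * to free an armed irq on a dead cpu */
            disable_percpu_irq(pmu_irq);    /* assumed variable */
            break;
        case CPU_STARTING:
            /* re-arm so perf has a live irq to disable when
             * the session finishes */
            enable_percpu_irq(pmu_irq, IRQ_TYPE_NONE);
            break;
        }
        return NOTIFY_OK;
    }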
Change-Id: Ic4e412e5f1effae0db34a3e4b5e7e5c65faed2a0
Signed-off-by: Ashwin Chaugule <ashwinc@codeaurora.org>
This is in preparation for adding support for the single-core A5
and dual-core A5, both of which have the same MIDR value.
Instead of adding extra parsing to the ARM generic perf_event file,
this patch moves it to the 'mach' directory where target types
can be detected in an implementation-specific manner.
The default behavior is maintained for all other ARM targets.
Change-Id: I041937273dbbd0fa4c602cf89a2e0fee7f73342b
Signed-off-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Remove the target name from the IRQ string, since the percpu IRQ API
is shared with all Qualcomm targets.
Change-Id: Id0e7d9267654b373e9360806900259e00bf5a0ab
Signed-off-by: Ashwin Chaugule <ashwinc@codeaurora.org>
The MSM SoCs which have ARM CPUs can power collapse. Ensure
the CPU-side PMUs correctly restore the counters after coming
out of power collapse.
Change-Id: I544a1dd8ced26f726ba115d14867d9e34c2a7944
Signed-off-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Provide a mechanism to track which msm perf patches
are present in a kernel. Some kernel branches do not
include all patches, which causes problems when trying
to debug perf issues. This framework provides a way
to keep track of which patches have been included
in a build.
Change-Id: Ib8ef311454564c4609d94decd93e039c80104275
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
8x50 is no longer supported in the msm-3.4 kernel, so remove this
feature.
Change-Id: I2156ef22cca82d3cce6a7d39e366b93cab32f811
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
The array is not defined for the maximum possible size,
so it is possible that the array may overrun when it is
dereferenced using the MSM_PM_MODE macro.
Add a dummy entry to make sure the array is of maximum size.
Change-Id: I5783b0aa6c8295c5c5aabcb498700c6af1a3eba5
Signed-off-by: Venkat Devarasetty <vdevaras@codeaurora.org>
A deprecated api can be used to limit the minimum and maximum cpu speeds.
Another driver temporarily needs to switch back to this api, instead of
using the new version. Make this api functional again by ensuring a stale
value is updated properly.
Change-Id: Ia63772e43ec534cf2bb58e7627a87315dae5d891
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
Add support for using clock APIs to do CPU/L2 frequency scaling. When clock
APIs are used to control CPU/L2 frequencies, the cpufreq driver must handle
the CPU -> L2 -> DDR bandwidth voting. Use an msm-cpufreq device to provide
the per SoC list of CPU frequencies and their L2/DDR bandwidth mapping
information.
If CPU/L2 clocks are not available, fall back to using acpuclock APIs. We
eventually want to remove the use of non-standard acpuclock APIs.
Change-Id: I2c4e2c3967d73e8cdbd9833f3cb36f3d75e27b4a
Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
Add support for a JTag fuse driver which can be used by other
JTag save-restore driver(s) to query the state of the jtag fuses and
determine if the hardware they manage is functionally disabled or
not.
Drivers can then take necessary actions, like failing the probe, if
the hardware they manage is functionally disabled.
Change-Id: Ie8c0dc159e52cf869d3ed63b45e5332d3e380e6d
Signed-off-by: Pratik Patel <pratikp@codeaurora.org>
Early suspend is deprecated in the framework, but disabling it
completely can affect other subsystems. Disable it only for the
MDP driver.
Signed-off-by: Naseer Ahmed <naseer@codeaurora.org>
'const void *' is a safer type for the caller function type. This patch
updates all references to the caller function type.
Change-Id: If950cfcfc63911756ac3709c8bf6da10c8b98f1b
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Git-commit: 5e6cafc83e30f0f70c79a2b7aef237dc57e29f02
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
This patch changes dma-mapping subsystem to use generic vmalloc areas
for all consistent dma allocations. This increases the total size limit
of the consistent allocations and removes platform hacks and a lot of
duplicated code.
Atomic allocations are served from a special pool preallocated at boot,
because vmalloc areas cannot be reliably created in atomic context.
Change-Id: Ibb2230e80249598a81122083bf3fa2f050a0a71e
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Git-commit: e9da6e9905e639b0f842a244bc770b48ad0523e9
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[lauraa@codeaurora.org: Context fixups and tweaking of some prototypes]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
The existing code left the HFPLL enabled in hfpll_init(), even if
the CPU was detected to be at a non-HFPLL rate. Correct this to
save power between hfpll_init() and the first runtime CPU
frequency switch. This also ensures votes for HFPLL
regulators are not left unnecessarily asserted.
Change-Id: Iaca5dc7e4769bdbd494d669726ba9b500256f793
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
SPARSEMEM is no longer needed since the features that required it
(memory hotplug, for example) have been superseded by other
solutions. Remove it since it introduces overhead.
CRs-Fixed: 430996
Change-Id: I25ff8591dae1e48b5b0bf8a0669196a6d7d0cd85
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
The unmap api is currently not handling unmapping of page table
entries (PTE) properly. The generic function that calls the msm
unmap API expects the unmap call to unmap as much as possible
and then return the amount that was unmapped.
In addition, the unmap function does not support an arbitrary input
length. However, the function that calls the msm unmap function
assumes that this is supported.
Both these issues can cause mappings to not be unmapped which will
cause subsequent mappings to fail because the mapping already exists.
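The expected contract, sketched with a hypothetical per-PTE helper:

    /* unmap as much of [va, va + len) as possible and report how much
     * was actually unmapped, as the generic caller expects */
    static size_t msm_unmap_sketch(struct iommu_domain *domain,
                       unsigned long va, size_t len)
    {
        size_t unmapped = 0;

        while (unmapped < len) {
            /* unmap_one_pte() is a hypothetical helper returning
             * the size of the entry it tore down, or 0 if none */
            size_t chunk = unmap_one_pte(domain, va + unmapped);
            if (!chunk)
                break;
            unmapped += chunk;
        }
        return unmapped;
    }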
Change-Id: I638d5c38673abe297a701de9b7209c962564e1f1
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
If msm_iommu_map_range() fails mid way through the va
range with an error, clean up the PTEs that have already
been created so they are not leaked.
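The usual unwind pattern, sketched with hypothetical helpers:

    /* map page by page; on failure, undo everything mapped so far so
     * no PTEs are leaked and no partial mapping remains */
    for (offset = 0; offset < len; offset += PAGE_SIZE) {
        ret = map_one_page(domain, va + offset);    /* hypothetical */
        if (ret) {
            unmap_range(domain, va, offset);        /* hypothetical */
            return ret;
        }
    }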
Change-Id: Ie929343cd6e36cade7b2cc9b4b4408c3453e6b5f
CRs-Fixed: 478304
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Currently, the iommu page table code treats a scattergather
list with physical address 0 as an error. This may not be
correct in all cases. Physical address 0 is a valid part
of the system and may be used for valid page allocations.
Nothing else in the system checks for physical address 0
for error, so don't treat it as an error.
Change-Id: Ie9f0dae9dace4fff3b1c3449bc89c3afdd2e63a0
CRs-Fixed: 478304
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Make sure iommu_map_range() does not leave a partial
mapping on error if part of the range is already mapped.
Change-Id: I108b45ce8935b73ecb65f375930fe5e00b8d91eb
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
The IOMMU map and unmap functions should use phys_addr_t
instead of unsigned int, which will not work properly with
LPAE.
Change-Id: I22b31b4f13a27c0280b0d88643a8a30d019e6e90
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Allow the IOMMUv1 to use 16M, 1M, 64K or 4K iommu
pages when physical and virtual addresses are
appropriately aligned. This can reduce TLB misses
when large buffers are mapped.
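The size-selection logic amounts to something like this generic sketch
(the real code lives in the msm iommu pagetable support):

    /* pick the largest iommu page that both addresses and the
     * remaining length are aligned to; larger pages mean fewer
     * PTEs and fewer TLB misses */
    static size_t pick_iommu_pgsize(phys_addr_t pa, unsigned long va,
                    size_t len)
    {
        static const size_t sizes[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K };
        int i;

        for (i = 0; i < ARRAY_SIZE(sizes); i++)
            if (IS_ALIGNED(pa | va, sizes[i]) && len >= sizes[i])
                return sizes[i];
        return 0;
    }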
Change-Id: Iffcaa04097fc3877962f3954d73a6ba448dca20b
Signed-off-by: Kevin Matlage <kmatlage@codeaurora.org>