am: 158f084de4
Change-Id: Iceba2489e1d528b98713bb3551ddf037303f0a7d
Bug: 131864803
Test: New bionic unit test to make sure that mallinfo and malloc_info match.
Change-Id: Id2fd0c38106fa0150ff6baae538ecaea356296ec
am: 801fe3af80
Change-Id: I883791d5235bae93594310a5aaef58fc9e14d877
Removing the stats makes the whole cache structure fit in a single page.
Bug: 131362671
Test: Verified that all bionic malloc benchmarks are still the same.
Test: It turns out that the malloc_sql benchmarks seem to get faster.
Test: Verified that after this change, it saves about 2K PSS per thread.
Change-Id: I4dcd633543f05f1a9d47db175f9977ddb42188a9
(cherry picked from commit a8b52518654725e64def5ebbd0ed472100e5a522)
It was found that keeping just a few allocations of the same size around
(even as few as 3) increases allocation time significantly. This appears
to be because I set the minimum tcache entries to 1. Removing that
setting brings the performance most of the way back; the small remaining
loss is probably due to a different bin layout than jemalloc 4.x.
This does increase PSS slightly for many processes, by about 1MB in my
trace runs. However, it improves the performance of most repeated
allocations by a factor of three.
Bug: 129743239
Test: Ran memory_replay on all traces.
Test: Ran new bionic benchmarks to verify things get better.
Change-Id: Iff41d5b5002919c4df03fcb7d742e501f9e38b8e
Merged-In: Iff41d5b5002919c4df03fcb7d742e501f9e38b8e
(cherry picked from commit 0af4ee86dfa643feb786d659dbb219133c776a34)
am: 69a24699a4
Change-Id: Iefbbbefa572a090befc750ccea46ee8cbdc3c086
Change-Id: I31e10da5bb3c0ee66b71a2f69c010acfaeeef6db
Bug: 129345239
am: 766934fd92
Change-Id: Ic95f585264bf0b8ab7f75b8a04c04bc5ca779e17
am: 08d6c0f845
Change-Id: I25764bf0f640d5cd24331b2dba9715860fdf4faf
am: 46971b81ee
Change-Id: I2c52b0d542aa1c273857486d312675f63625a6dc
Bug: 124264835
Test: Ran bionic unit tests.
Test: Ran dumpsys -t 6000 meminfo --unreachable of the chrome process without crashing.
Change-Id: I2cc66e443fa278621b9878a888c74f38efcb79eb
am: 27c2c8bead
Change-Id: I421cf23cfc8d2bb0280546c2217861ecc6d93f14
am: b4e426ffeb
Change-Id: I7cf232a3f365c7563ee96b415f898a1d6842254e
am: cc374363ec
Change-Id: Ia44573af27aa921a31bd2e798066b43dc1d72d10
A jemalloc user reported that the best-fit selection was causing them
a memory leak. This code has been completely removed from the next
release of jemalloc (5.2.0), so remove it here since it doesn't have
any real benefit.
See https://github.com/jemalloc/jemalloc/issues/1454
Running the memory dumps, removing best fit appears to be a win: it is
slightly faster and has the same PSS/VA.
Bug: 128697497
Test: Ran jemalloc unit tests.
Test: Ran memory dumps in 32 bit and 64 bit and observed that the PSS and VA stayed the same, while run time improved slightly.
Change-Id: I98a8ddf2cea837c8ade1afd4a998960c253d3932
am: 0b13d0583d
Change-Id: I5257268590878a34464f9028bc9b4b667828e953
Bug: 126125118
Change-Id: I0485d2754a7a36d6b228f2926d2fc06e96a08c11
Bug: 120848293
Change-Id: I38bf95211e9e25ff919ac7260cc7822bdaee7948
am: 1078c77e37
Change-Id: I1b4d6abc8d7f3d2d2f1779edf66717f17cac4bd8
am: 4135abe739
Change-Id: I9287434a0071573eb6766c5dcfb4c8f0af7a5627
am: 347192e6aa
Change-Id: I69b6d3f3e0b0cc84a0cffc3945c2e61e667a97b4
We don't currently use this, and it causes libc.a to have a dependency
on libdl because it interposes pthread_create with dlsym.
Test: treehugger
Bug: None
Change-Id: I259ed5eb8e72045430aee90df1124c1906512fcd
am: bb955dfb55
Change-Id: I4207473a67504d09dfac4cc7ac27579ddb7ab537
am: 7a1cc0cb97
Change-Id: Id7543021974d21db04664495528b0284538b8f60
am: ab33b153b4
Change-Id: I777085a822a84cdd4c94df0ffa0d2de058ab0176
Bug: 33166666
Test: gerrit uploader
Change-Id: I6c36b8e1c927160b7770f65c0fc1b10517a314aa
expansion. am: 759026fed9 am: bf51fc27ed
am: b96806f4a5
Change-Id: I30a0794c0230cb5ae0d633166e1eb13695b59d90
expansion. am: 759026fed9
am: bf51fc27ed
Change-Id: I37f7b06ffc8cf5d31a84b6d9eb04cd862c98c548
am: 759026fed9
Change-Id: I181a8edb6255056e318a5c1ccedee35dc24b329b
Test: Rely on ART (linux-bionic build target) postsubmit testing.
Bug: 31559095
Change-Id: Ie911abd8ca173b231c03730c326de7777b97452c
am: 2f36dd205c
am: 6a39aeaf1f
Change-Id: I5c3578922de01ff383aa618d194a26215fb179f1
am: 2f36dd205c
Change-Id: I07e5326733d920af74d4b2496b491b4628de4828
am: 1f8849fffd
Change-Id: I398a8be3f958cccb903379aa9871979dda8424b4
It was discovered that we were building some objects inconsistently due
to an optimization in cc_library to only build objects once and use them
for both the static and shared libraries. But static libraries didn't
get system_shared_libs set automatically, and we didn't notice that we
would have built the objects differently.
So static libraries now get the default system_shared_libs, we allow
adjusting that for static vs shared in a cc_library, and we disable the
optimization if the linked libraries are configured differently between
static and shared in a single cc_library.
This triggers dependency cycles for static libraries that libc/libdl
use, so fix those cycles here.
Test: treehugger
Change-Id: I75cd76db2366179c7e38578210db728e6181442c
cd4da0a323
am: bbdecd75a8
Change-Id: I9f6cb4246e3acd1bf581df3904614ba79f374061
am: cd4da0a323
Change-Id: I7038703ccfbea3b01f82f9d6edd69ea59496073d
am: 13715ae41a
Change-Id: Ic5a384917f9ffa5ff72f3b243aa64bdb01f34283
Bug: 120032857
Test: Passes unit tests.
Test: Ran dumpsys -t 300 meminfo --unreachable -a without crashes.
Change-Id: I3d784ed2b449970966403bed7d701e2ff7434fba
am: a2b4c8c34a
Change-Id: I133d7fd3a2efd48137c52482b13418c10473c318
am: 7693773220
Change-Id: Ia1f9ba21ab7a255f2e6013c8ea7d611d9afb7dd1
am: 08ccc19933
Change-Id: Idd0442c57e11d5f7497c8d87f2d80f560a3fdbd4
The lstats part of the arena structure was not being counted at all,
so add that counting for mallinfo.
Bug: 119580449
Test: New unit test passes; without this change, the test fails.
Change-Id: I97b231f9189a79f0ce0f55fe6c4cc00266ca75ac
am: a2f0181050
Change-Id: I4cad40f6f2347b5e831d6d0ddd3bc4203cd90aa7
am: a1f7a3bacc
Change-Id: I8082c1358c87a0092ed6e4c30739104a4c47676a
am: 75569e30c5
Change-Id: Ia8eb69bbc660bfe8e5e55a7b1abb0803fa850bec
Change the minimum size of a map to make sure that the entropy of
allocations is no worse than jemalloc 4's. At a future point, we should
see how increasing entropy affects performance.
Test: All unit tests pass.
Change-Id: I644f4fc84fa4ad80ce37fecbe48accbd6cb1034e
am: 3bff253f72
Change-Id: I46c6e93a753024d4b9f31df3c4220d0fd5be21b1
am: e1ad3f0e8b
Change-Id: I00da735a1c6105f4a0320c779c844dc39c41d4e0
am: c6954b2064
Change-Id: I014598f8c8bd19958546dc7ea7dc9e114faabe6c
Add support for svelte.
Add je_iterate support.
Update some of the internals so that bad pointers in je_iterate do not
crash.
Test: Ran new bionic unit tests, ran libmemunreachable tests, booted system.
Change-Id: I04171cf88df16d8dc2c2ebb60327e58b915b9d83