Commit 2b208647 authored by Sven Eckelmann, committed by Andreas Ziegler

ar71xx-generic: Reduce SquashFS blocksize to 64K (#1455)

Some devices with only 32 MB of RAM, such as the Nanostation M2, suffer from
sudden high load combined with a SquashFS-related OOM reboot:

  logd invoked oom-killer: gfp_mask=0x2420848, order=0, oom_score_adj=0
  CPU: 0 PID: 774 Comm: logd Not tainted 4.4.135 #0
  Stack : 804214dc 00000000 00000001 80480000 8182fa3c 80474803 804028d0 00000306
          804e378c 00001ade 00000040 00000000 00000000 800a7f10 00000006 00000000
          00000000 00000000 804063e0 80c69994 804e6542 800a5e8c 02420848 00000000
          00000001 801fd600 00000000 00000000 00000000 00000000 00000000 00000000
          00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
          ...
  Call Trace:
  [<800721cc>] show_stack+0x54/0x88
  [<800d5468>] dump_header.isra.4+0x48/0x130
  [<800d5c38>] check_panic_on_oom+0x48/0x84
  [<800d5d64>] out_of_memory+0xf0/0x324
  [<800d9888>] __alloc_pages_nodemask+0x6b8/0x724
  [<800d2960>] pagecache_get_page+0x154/0x270
  [<80134cb0>] __getblk_slow+0x15c/0x374
  [<80160418>] squashfs_read_data+0x1c8/0x6e8
  [<80164628>] squashfs_readpage_block+0x32c/0x4d8
  [<801622a4>] squashfs_readpage+0x5bc/0x6d0
  [<800dd030>] __do_page_cache_readahead+0x1f8/0x264
  [<800d479c>] filemap_fault+0x1a8/0x458
  [<800efc1c>] __do_fault+0x64/0xd0
  [<800f2824>] handle_mm_fault+0x4a4/0xb40
  [<80076e98>] __do_page_fault+0x134/0x470
  [<80060820>] ret_from_exception+0x0/0x10

Reducing the SquashFS block size should at least partially mitigate the
problem on these nodes: reading a SquashFS data block means decompressing
the whole block at once, so smaller blocks mean smaller transient
allocations under memory pressure. The image size will grow slightly, but
this is not a problem for ar71xx-generic (in contrast to ar71xx-tiny).
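
The effect can be checked on a built SquashFS root filesystem image with
squashfs-tools; the image name below is only illustrative, and the exact
output wording depends on the squashfs-tools version:

  # Print the superblock information and pick out the block size;
  # 65536 bytes corresponds to the new 64 KiB setting.
  $ unsquashfs -s root.squashfs | grep -i 'block size'
  Block size 65536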
parent 728d1ffd
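The change adds an explicit SquashFS block size override to the
ar71xx-generic target definition (the file path is not shown in this
excerpt); the surrounding lines appear here without diff markers: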
  config 'CONFIG_GLUON_SPECIALIZE_KERNEL=y'
  config 'CONFIG_TARGET_SQUASHFS_BLOCK_SIZE=64'
  ATH10K_PACKAGES=
  ATH10K_PACKAGES_QCA9887=
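
These 'config' lines end up in the generated OpenWrt build configuration,
so the effective value can be verified after a build; the directory name
below is an assumption and may differ between Gluon versions:

  # Confirm that the block size override made it into the OpenWrt .config.
  $ grep SQUASHFS_BLOCK_SIZE openwrt/.config
  CONFIG_TARGET_SQUASHFS_BLOCK_SIZE=64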