A complete record of a screw-up and its recovery (changing the screen resolution on Android)

I have a Samsung Android tablet (SM-P601) with a 2560×1600 resolution and a pixel density of 320, and some dumb apps keep deciding it isn't a mobile device. Today I learned that the wm command can temporarily change the resolution, so I foolishly set it to 640×360, and disaster struck: I had made the change as root inside Terminal Emulator, ADB was not enabled, the screen was now too small to see anything, I couldn't type wm reset, and rebooting didn't help. With the help of almighty Google I eventually, painfully, got it back. Here are the pitfalls I hit along the way…

Level 1: How do I enable ADB?

A web search showed that, sure enough, someone abroad had been just as silly as me: he changed the resolution without ADB enabled and couldn't change it back, like this guy on XDA, who then worked out how to force-enable ADB from Recovery:

I use TWRP. First, long-press the power button to reboot and hold Home + Volume Up to enter Recovery (the key combination varies between devices). Then mount System under Mount, go to Advanced – Terminal, and use vi /system/build.prop to add the following lines:
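If I remember correctly, the lines in question are the following properties; double-check the XDA thread for your own device, since the exact values may differ:

persist.service.adb.enable=1
persist.service.debuggable=1
persist.sys.usb.config=mtp,adb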

If you don't know how to use vi, or find it too much hassle, you can use File Manager to copy /system/build.prop to /sdcard, use TWRP's MTP to move it to your computer for editing, and then copy it back.

After rebooting into the system, adb devices on the computer finally showed the device, but it was marked unauthorized, so I still couldn't get an adb shell… On to level 2. (If your device runs an older Android version that doesn't require ADB authorization, or you have authorized this computer before, skip straight to the last section.)
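For reference, the unauthorized state looks roughly like this (serial number made up):

$ adb devices
List of devices attached
4d0041xxxxxxxxxx    unauthorized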

Level 2: How do I authorize ADB?

I still couldn't do anything on the screen. Normally, when you connect ADB, an "Allow USB debugging?" prompt appears and tapping Allow completes the authorization. Another round of googling turned up someone on StackOverflow who had solved exactly this problem.

First, find the adbkey.pub public key on your computer. It may be in the directory pointed to by the $ANDROID_SDK_HOME environment variable (if you have installed the SDK), in C:\Users\<username>\.android (the Windows default), or in ~/.android (the default on other systems). Once found, rename it to adb_keys and transfer it via TWRP's MTP to the root of the phone's /sdcard (i.e. internal storage).

Then go back to TWRP's Advanced – File Manager, find /sdcard/adb_keys, and copy it into the /data/misc/adb directory. Reboot the phone and ADB will connect.
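If you prefer the TWRP terminal over File Manager, the equivalent copy is roughly the following (assuming /data and /sdcard are mounted):

cp /sdcard/adb_keys /data/misc/adb/adb_keys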

How to change the resolution correctly

Get an adb shell with the system booted. When changing the resolution, the screen should be off (locked), which is exactly why it's best not to do it from a Terminal Emulator command line on the device.

To restore the default resolution, the command depends on the Android version:

  • am display-size reset(< Android 4.3)
  • wm size reset(≥ Android 4.3)

If you then want to change the resolution or density, the commands are:

  • am display-size 1080x720
  • am display-density 240
  • wm size 1080x720
  • wm density 240
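For example, from the computer over ADB on Android 4.3 or later (with the device screen off, as noted above):

adb shell wm size reset      # restore the default resolution
adb shell wm size 1080x720   # or set a custom one
adb shell wm density 240     # and a custom density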

ngx_pagespeed module for Ubuntu’s stock nginx

I recently decided to build ngx_pagespeed for Ubuntu’s stock nginx, since nginx has supported dynamic modules (as Apache has for a long time) since 1.9.11. Being “dynamic” means that third-party modules no longer have to be built into the main nginx binary; they can be built separately and loaded dynamically at runtime, as long as the module and the main nginx binary are compatible, i.e. their build configuration parameters are similar enough.

(For the Chinese version, see the companion post: 为 Ubuntu 官方源的 nginx 单独编译 ngx_pagespeed 模块)

I haven’t seen anyone else distribute third-party nginx modules built separately from the main binary. The nginx.org official repository does provide several modules as packages, but I believe they are built along with the binary they ship with. A likely reason is that even though separately built modules can be loaded, they are not guaranteed to work smoothly. I noticed that nginx 1.11.5 is trying to improve the compatibility of dynamic modules, but whether that helps remains to be tested.

I used the latest Debian (Ubuntu) packaging tools and formats to build the package, and Launchpad PPA to host the apt repository.

You can use the apt commands shown on the PPA page to enable the PPA and install the packages. The following is an example:
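(The PPA name below is a placeholder; substitute the one shown on the Launchpad page, and pick the flavor that matches your installed nginx.)

sudo add-apt-repository ppa:<launchpad-user>/<ppa-name>
sudo apt update
sudo apt install ngx-pagespeed-nginx-core   # or -light / -full / -extras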

To use this module, first install the package for the right flavor, then add the line:
load_module modules/ngx_pagespeed.so;
… to your /etc/nginx/nginx.conf file before reloading nginx.

Please remember that this is an experimental feature, and you should NOT use it in production. More information can be found in the PPA description below.

PPA Description

This repository contains “ngx_pagespeed” dynamic module for Ubuntu’s stock nginx packages, including all flavors available in the official repository.

* WARNING: Building dynamic modules alone is EXPERIMENTAL. It is NOT guaranteed to work by the nginx authors. Even though the module can be loaded and has been tested on my own server, I still don’t recommend using it in PRODUCTION environments. USE AT YOUR OWN RISK. *

The package names are “ngx-pagespeed-nginx-FLAVOR” where FLAVOR is core / light / full / extras, which should match your nginx flavor. Check with:
$ dpkg -l | grep nginx

The package versions track ngx_pagespeed’s latest stable releases and the nginx versions in Ubuntu’s REL-updates (e.g. xenial-updates). For example, the first version available in this PPA is built with ngx_pagespeed 1.11.33.4 and nginx 1.10.0-0ubuntu0.16.04.4 from xenial-updates.

To use this module, first install the package for the right flavor, and then add the line:
load_module modules/ngx_pagespeed.so;
… to your /etc/nginx/nginx.conf file before reloading nginx.

* NOTE: This package is linked against the pre-built “psol” binaries provided by Google, so only i386 and amd64 systems are supported for now. In the future I will update the package to build psol itself. *

 

为 Ubuntu 官方源的 nginx 单独编译 ngx_pagespeed 模块 (Compiling the ngx_pagespeed module separately for Ubuntu's stock nginx)

I recently tried to compile the ngx_pagespeed module for the nginx in Ubuntu's repository. Since 1.9.11, nginx supports dynamic modules: third-party modules no longer need to be compiled into the main nginx binary and can instead be loaded dynamically, provided that nginx and the module were built with essentially the same configuration parameters.

English version: ngx_pagespeed module for Ubuntu’s stock nginx

As far as I can tell, nobody else has published separately built dynamic modules; the nginx.org official repository does offer a few modules, but they appear to be compiled together with the main binary. The likely reason is that a module compiled this way can be loaded but isn't guaranteed to run stably. From what I've seen, nginx has been working on improving module compatibility since 1.11.5, but whether that works well is still hard to say.

Throughout building the module package I used Debian's (Ubuntu's) latest build and packaging tools and formats, and a Launchpad PPA to host the apt repository.

You can use the commands from the PPA page above to enable the repository and install the packages. Typically the commands would be:
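(Again, the PPA name below is a placeholder for the one shown on the Launchpad page; install the flavor matching your nginx.)

sudo add-apt-repository ppa:<launchpad-user>/<ppa-name>
sudo apt update
sudo apt install ngx-pagespeed-nginx-core   # or -light / -full / -extras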

Before using the module, install the package matching your nginx flavor, then add the following line to your /etc/nginx/nginx.conf:
load_module modules/ngx_pagespeed.so;
Finally, reload the nginx configuration.

Note that building modules separately is an experimental feature; please do not use this module in production. See the PPA description for more information.

(What follows is a backup of an earlier post.)

Thoughts on packaging software for Debian/Ubuntu

I've recently been trying to package a piece of software as a .deb for Ubuntu, and after reading through the Ubuntu and Debian packaging documentation I find it absurdly complicated. Both are full-featured package managers, yet Arch Linux's pacman needs only a PKGBUILD plus a few optional extra files (such as install scripts), while the deb system needs a huge pile of helper scripts to "simplify" the packaging work, and the learning curve is brutally steep. After all this fiddling, I also came to appreciate how much quiet effort sits behind an everyday apt-get…

Here is a rough record of the whole process of packaging a piece of software and publishing it to a Launchpad PPA; a command-line sketch follows the list. Some steps may differ depending on the software, and my main reference was the Debian maintainers' guide (which has a Chinese translation).

  1. Create a directory and download the source. It's usually recommended to create a dedicated directory (e.g. ~/package), download the source *.tar.gz there, and extract it into its own source directory (~/package/hello-2.10).
  2. Initialize the debian directory. Inside the source directory, use the dh_make tool to create the skeleton of the debian directory (~/package/hello-2.10/debian), e.g. dh_make -f ../hello-2.10.tar.gz
  3. Optional: make whatever changes the source needs for Debian/Ubuntu, managing the patches in the debian/patches directory with the quilt tool.
  4. Edit the control files in the debian directory as needed; the three most important are:
    1. control – package metadata, including dependencies and the description.
    2. rules – the concrete steps for configuring and building the package, written in GNU Make format, so you can modify specific targets as needed. This is where the water is deepest: each target calls a set of dh_xxx helper scripts by default, and you can override their default behavior by defining targets named override_dh_*, and so on.
    3. changelog – the package's version history, from which the build tools determine the version number to build.
  5. Run the build command in the source directory. There are many to choose from, e.g. dpkg-buildpackage, debuild, pbuilder; roughly speaking, each is a higher-level wrapper around a lower-level one.
  6. After a successful build, sign the .dsc and .changes files with your own PGP key. Tools like debuild can do this for you, or you can do it manually with debsign.
  7. Upload the package to a repository. Launchpad PPA, for example, requires uploading with the dput tool, and only the source package is uploaded: their servers build and publish the binary deb packages.
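As a rough command-line sketch of the steps above, using the hello-2.10 example (the PPA name is a placeholder, and the debian/* editing is elided):

cd ~/package && tar xf hello-2.10.tar.gz && cd hello-2.10
dh_make -f ../hello-2.10.tar.gz      # step 2: create the debian/ skeleton
# steps 3-4: manage patches with quilt; edit debian/control, debian/rules, debian/changelog
debuild -S -sa                       # steps 5-6: build and PGP-sign the source package
dput ppa:<launchpad-user>/<ppa-name> ../hello_2.10-1_source.changes   # step 7: upload (filename depends on debian/changelog)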

That's the gist of it; in practice every tool involved in each step is quite complex, and it took me two days of research before I got my package built properly. I'll publish it after some testing. Absolute black magic, stay tuned.

LeetCode 456: “132” Pattern

I rushed this one out last night before heading back to the dorm, because it was marked Medium and didn't look hard (and indeed it isn't).

Problem link: https://leetcode.com/problems/132-pattern

The task is to determine whether an array contains a subsequence a_i, a_j, a_k (i < j < k) such that a_i < a_k < a_j.

It's like a winners' podium in reverse, but it makes the requirement easy to visualize.

The main algorithm

The idea is to iterate over every possible k, take the first (i.e. nearest) number before it that is greater than a_k as a_j, and then check whether the minimum of a_0 .. a_{j-1} is smaller than a_k.

The reason for taking the first greater number is also obvious: it maximizes the range in which we can look for a_i; otherwise we might get false negatives.

The idea is simple; the key is doing it in O(n). Iterating over every k is already O(n), so the other two steps must each be O(1), which means preprocessing. Preprocessing the minimums is easy: record them in an array, with O(n) space. Preprocessing "the position of the first number before a_k that is greater than it" is trickier and is covered in the sub-algorithm below; its space complexity is also O(n).

So the final algorithm has three steps: precompute the minimum of a_0 .. a_j, precompute the first greater number before each a_k, and iterate over every possible k.

The sub-algorithm

To precompute, for each number, the first greater number before it, I used an old stack trick. Maintain a stack of pair<value, position> whose values are decreasing. Traverse the array from start to end; for each number, first pop from the top every value that is not greater than it. If the stack ends up empty, there is no greater number before it and the precomputed result is -1; otherwise the result is the position of the value now on top. After that, push the current pair<value, position> onto the stack.

Why this works: since we want the first greater number before each position, if the value on top of the stack is smaller than the current one, then for any later number the current value is the better candidate for "the first greater number before it", so the top can safely be popped. Each number is pushed and popped at most once, so the time complexity is O(n).

C++ code

Since I wrote it in a hurry, I didn't care about a lot of edge cases, and there is probably still room for optimization (the runtime only beat about 55% of submissions…).
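For reference, a straightforward implementation of the three steps above (prefix minimums, previous-greater positions via the decreasing stack, then scanning every k) could look like this; it is a sketch of the described algorithm rather than the exact code I submitted:

#include <vector>
#include <stack>
#include <utility>
#include <algorithm>
using namespace std;

class Solution {
public:
    bool find132pattern(vector<int>& nums) {
        int n = (int)nums.size();
        if (n < 3) return false;

        // prefixMin[j] = min(nums[0..j]) — the candidate pool for a_i
        vector<int> prefixMin(n);
        prefixMin[0] = nums[0];
        for (int j = 1; j < n; ++j)
            prefixMin[j] = min(prefixMin[j - 1], nums[j]);

        // prevGreater[k] = index of the closest element before k that is
        // strictly greater than nums[k], or -1 if there is none
        vector<int> prevGreater(n, -1);
        stack<pair<int, int>> st;   // <value, index>, values strictly decreasing
        for (int k = 0; k < n; ++k) {
            while (!st.empty() && st.top().first <= nums[k]) st.pop();
            prevGreater[k] = st.empty() ? -1 : st.top().second;
            st.push({nums[k], k});
        }

        // enumerate every possible k and check a_i < a_k < a_j
        for (int k = 2; k < n; ++k) {
            int j = prevGreater[k];                    // nums[j] > nums[k]
            if (j >= 1 && prefixMin[j - 1] < nums[k])  // some a_i before j is < a_k
                return true;
        }
        return false;
    }
};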

Magically “defeating” different CDN implementations

In this article I will show some different CDN implementations along with cases where each of them fails to bring the best performance. I am not a researcher in this area, so some of the points are based on my personal experiences.

1. Why people use CDN

Usually when people visit a website, their browsers first query the IP address from their ISP’s recursive DNS server, which in turn queries the domain’s authoritative DNS. They then connect to whatever IP address the DNS returns, which is often distant from them (geographically far away, with high latency).

CDNs exist to solve this problem. By putting edge caching servers in different parts of the world (or of the target country), close to end users, website operators can speed up load times and improve the user experience.

2. Traditional Geo-DNS setup

The most common and simplest solution is to use a “Geo-DNS” service to point the same domain to different edge servers with different IP addresses (this is important). In this case, when people query the IP address of a domain, the authoritative Geo-DNS can point them to the nearest edge server, based on the users’ IP, their ISPs’ recursive DNS servers’ IP, or the users’ IP as provided by the recursive DNS via the EDNS Client Subnet extension (built on EDNS0), if supported.
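One quick way to observe this, assuming your dig supports +subnet and both the resolver and the authoritative Geo-DNS honor EDNS Client Subnet (the hostname below is hypothetical):

dig +short www.example-cdn.com @8.8.8.8 +subnet=198.51.100.0/24   # pretend the client sits in one region
dig +short www.example-cdn.com @8.8.8.8 +subnet=203.0.113.0/24    # pretend it sits in another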

This works fine in an ideal network setup, but it can easily fall apart. Here are some cases that I have come across:

(a) Unicast DNS

Sometimes Geo-DNS providers don’t use Anycast, but instead provide unicast IP addresses for different regions. The recursive DNS has no way of telling which one is closest to it, so it queries a random one, which can result in slow DNS resolution on the first visit.

Example: DNSPod and CloudXNS (Popular Unicasted Geo-DNS providers in China)

DNSPod servers use ChinaNet, China Unicom and China Mobile unicasted IP addresses.

(b) Bad GeoIP database

Some Geo-DNS providers don’t update their GeoIP database frequently enough, or simply don’t have enough data.

Example:

(1) Amazon AWS CloudFront and Akamai don’t have servers in China for obvious reasons, but Chinese visitors are not consistently directed to the nearest locations (Hong Kong, South Korea, Japan). Sometimes a query from China gets an answer pointing to European locations, which results in ~500 ms latency.

Akamai directs ChinaNet users to Frankfurt, Germany when there are obviously better choices.

(2) Some Geo-DNS providers in China, most notably Aliyun DNS: when both “Domestic” and “Global” records are set, they may direct Chinese users to the “Global” servers.

(c) DNS different from network exit

Sometimes people use recursive DNS servers in a network different from the one their traffic actually exits through.

Example:

(1) In my university, we have mixed network exits, one in CERNET (AS4538) and one in TieTong (AS9394). Our recursive DNS has a CERNET address, so most Geo-DNS providers give us CERNET addresses, or (if the website doesn’t have CERNET servers) addresses in other networks, for instance ChinaNet (AS4134). But our network exit is configured to use TieTong by default, so for most websites we end up visiting ChinaNet servers over a TieTong connection, even when the sites also have TieTong servers.

A more extreme case: some networks have different routing policies for TCP and UDP (arguably a violation of the OSI layering), so when you do a DNS query over UDP you exit through network A, but when you actually connect to TCP port 80 you exit through network B. Magical? But true.

(2) Sometimes recursive DNS providers and/or Geo-DNS providers don’t support EDNS Client Subnet; if either end lacks support, it won’t work. For instance, if a user of the open recursive DNS service “114DNS” (anycasted across several Chinese networks) is on a network that isn’t part of 114DNS’ anycast footprint, and the authoritative Geo-DNS doesn’t support the extension, the answer will point to an IP in the same network as the 114DNS node, which is not the user’s network.

3. TCP Anycast setup

Some modern CDN providers use the TCP Anycast technique: they provide a single IP address for their edge servers in multiple locations, and visitors are routed to the nearest location, determined by how the provider announces its routes to other networks.

Such providers include CloudFlare and MaxCDN, which use a single anycasted IP for their edge servers across the planet. Verizon EdgeCast uses a slightly different method: it provides several anycasted IPs, each representing a geographical zone (Asia-Pacific, North America, South America, Europe).

A unified anycasted IP solves many of the problems mentioned above, and it’s becoming harder and harder to defeat. But here come the exceptions:

(1) Magical routing policy (again)

The current Chinese IPv6 deployment has only one international exit (AS23911), which has two exit points: a default one in Los Angeles via HE.net (AS6939) and a premium one in Hong Kong (HKIX). When I resolve EdgeCast’s IPv6 address, I get one in the 2606:2800:147::/48 network, which is anycasted in Asia. But when I trace the route to this address, the packet goes from China to Los Angeles and back to Asia, resulting in ~400 ms latency. Even if people use an anycasted recursive DNS (like Google’s), since it has servers in Hong Kong, the result is the same. By querying the domain at OpenDNS (which doesn’t have an Asian server) I get an IP in the 2606:2800:11f::/48 network, which is anycasted in North America, and the latency is only ~200 ms (the same as the network exit’s).

Tracing route from AS23911 to EdgeCast’s IPv6 edge servers in different continents.

This only happens with EdgeCast’s “Continent-based Anycast” network. CloudFlare is not affected. But it has another kind of problem.

(2) Artificial routing deterioration

CloudFlare has edge servers everywhere, including Hong Kong, Taipei, Japan, South Korea, etc., all very close to Chinese users. But the major Chinese ISPs’ international-exit routing policies direct CloudFlare traffic to Los Angeles (ChinaNet) and San Jose (China Unicom), where it is then handed to the nearest edge servers in <3 hops. They did the same thing to SoftLayer’s Hong Kong location, for some magical reason: maybe price, maybe [censored] ;). The latency from both ISPs to CloudFlare’s US-west locations is 200~300 ms, whereas with TieTong (which uses Hong Kong as its international exit) it is <100 ms.

ChinaNet and Unicom users get 200~300 ms latency while China Mobile (TieTong) users get 80.

This is obviously not CloudFlare’s fault, because they cannot control the routing policy from another AS to themselves (unless they pay the other network to do so). If your ISP is doing this, switch to another ISP; if the whole country is doing this, maybe switch to another country?

Summary

To sum it up, when your network and your customers’ networks don’t have any of the quirks above, a simple anycasted Geo-DNS setup works fine – you don’t even need a commercial CDN service. But real-world networks are messy, and so far a global TCP Anycast deployment is the best we can do.