Unbrick Yoga tablet

Since my Lenovo Yoga tablet (10″, dunno the model number any more) has a tendency to turn itself into an expensive paperweight after updates, here’s the procedure to unbrick it:

  1. Get a physical Windows 7 PC. Anything newer than Windows 7 will not do, since it recognizes the tablet and installs the wrong device drivers! You really need Windows 7!
  2. Get the image file and the device drivers. A working image with the file name Yoga_tablet_10_A422_000_040_131023_WW_WIFI.rar is available on tollana or at http://www.lenovo-forums.ru. The driver filename is SP_Flash_Tool_Driver_Auto_Installer_v1.1236.00.7z, btw.
  3. Start the tablet in rescue mode: press Volume Down (that’s the upper part of the volume switch when the tablet is standing on its foot) and the power button simultaneously until some Chinese characters appear, then plug it in. You should get unknown devices in the Windows Device Manager.
  4. Install the drivers (don’t be afraid of the warnings) and unplug the device once it’s done.
  5. Unpack the firmware image and start the included SP Flash Tool. Under “Options”, enable USB mode if applicable.
  6. “Scatter Load” the file “Yoga_tablet_10_A422_000_040_131023_WW_WIFI/target_bin/target_bin/MT6589_Android_scatter_emmc.txt” and wait until Flash Tool is done, showing “Searching” in the lower toolbar.
  7. Click “Download” and ignore the warning.
  8. Select the menu entry containing “eMMC” on the tablet.
  9. Only now plug it back in, wait for Flash Tool to recognize it and download the firmware.
  10. Once Flash Tool is done, unplug the tablet, restart it and wait for the installation to finish.
  11. Install several updates and reconfigure it.

Don’t share, but enjoy!

Disk Change @ Hetzner

1. Get serial of defective disk

# for i in a b; do echo $i; smartctl -i /dev/sd$i | grep -i serial; done

The disk not returning a serial is most likely the defective one.
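
If the machine has more than two disks, here’s a more generic variant of the loop above (a sketch, assuming smartmontools is installed and all disks show up as /dev/sd?):

# for d in /dev/sd?; do echo $d; smartctl -i $d | grep -iE 'model|serial'; done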

2. Create a Ticket @ Hetzner Robot

Log in to the Hetzner Robot and create a ticket via the left menu:

Anfragen -> Serverprobleme ausklappen -> Festplatte defekt
(Requests -> expand Server problems -> Hard disk defective)

If the serial of the defective drive can’t be obtained, enter the serial of the working one and stress that said serial belongs to the working drive. Otherwise they’ll swap the wrong one and you’ll end up with nothing!

3. Copy partition table

Once the drive is replaced, copy the partition table from the working drive to the new one. sgdisk comes to the rescue (THINK BEFORE YOU COPY & PASTE, IT WON’T WORK VERBATIM ANYWAY):

# sgdisk -R=/dev/sd[new] /dev/sd[old]

If in doubt, RTFM twice: man sgdisk
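
Note that -R clones the disk and partition GUIDs along with the partition table; the sgdisk man page recommends randomizing them on the new disk afterwards:

# sgdisk -G /dev/sd[new]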

4. Resync the RAID

mdadm comes to the rescue:

# mdadm --manage /dev/md0 --add /dev/sd[new][part]

If you can’t get the former command to work, again: RTFM! It’s your data!
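
To check that the new device actually joined the array and the rebuild kicked off, query the details (a sketch; /dev/md0 as above):

# mdadm -Q --detail /dev/md0

The fresh disk should show up as “spare rebuilding” until the resync has finished.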

5. Stare at the progress

# watch cat /proc/mdstat
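
If you’d rather block until the resync has finished, e.g. at the end of a script, mdadm can do the waiting for you:

# mdadm --wait /dev/md0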

6. Be delighted!

SSD Installation

On May 27, 2015, I replaced the system RAID of hadante (4 spinning 500 GB disks, RAID5) with 4 Samsung SSDs (also 500 GB, RAID5). It was well worth it. The speed is amazing!

Along with the disks I ordered four 3.5″ -> 2.5″ mounting frames. As it turned out I only needed two, because with the right frame you can easily stack two SSDs on one. There’s even a gap in between, so I don’t expect heat problems.

The RAID5 rebuild was blazing fast. Overall, everything seems to be much snappier.

The serial numbers – SSD drives from top to bottom:

  1. S21JNXAG415926
  2. S21JNXAG415880
  3. S21JNXAG433264
  4. S21JNXAG433168

The old serial numbers – spinning drives from top to bottom:

  1. S13TJ1EQ401080 (Samsung)
  2. 5VMJ32Q9 (Seagate)
  3. 3PM23C12 (Seagate)
  4. S13TJ1EQ401081 (Samsung)

 

RAID

RAID-Devices

  • /dev/md0
    am@hadante ~ $ sudo mdadm -Q --detail /dev/md0 
    /dev/md0: 
            Version : 1.2 
      Creation Time : Wed May 27 13:52:56 2015 
         Raid Level : raid5 
         Array Size : 1463976960 (1396.16 GiB 1499.11 GB) 
      Used Dev Size : 487992320 (465.39 GiB 499.70 GB) 
       Raid Devices : 4 
      Total Devices : 4 
        Persistence : Superblock is persistent 
     
      Intent Bitmap : Internal 
     
        Update Time : Sat Jun 10 15:26:28 2017 
              State : clean  
     Active Devices : 4 
    Working Devices : 4 
     Failed Devices : 0 
      Spare Devices : 0 
     
             Layout : left-symmetric 
         Chunk Size : 512K 
     
               Name : archiso:0 
               UUID : 6d795529:03e05967:c4d50bcf:5dd604b7 
             Events : 3136 
     
        Number   Major   Minor   RaidDevice State 
           0       8      130        0      active sync   /dev/sdi2 
           1       8       18        1      active sync   /dev/sdb2 
           2       8        2        2      active sync   /dev/sda2 
           4       8      146        3      active sync   /dev/sdj2
  • /dev/md1
am@hadante ~ $ sudo mdadm -Q --detail /dev/md1 
/dev/md1: 
        Version : 1.2 
  Creation Time : Thu May  2 18:08:00 2013 
     Raid Level : raid5 
     Array Size : 8790405120 (8383.18 GiB 9001.37 GB) 
  Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB) 
   Raid Devices : 4 
  Total Devices : 4 
    Persistence : Superblock is persistent 
 
    Update Time : Sat Jun 10 15:26:58 2017 
          State : clean  
 Active Devices : 4 
Working Devices : 4 
 Failed Devices : 0 
  Spare Devices : 0 
 
         Layout : left-symmetric 
     Chunk Size : 512K 
 
           Name : hadante:1  (local to host hadante) 
           UUID : 0227c805:00c429e6:231a9f6f:168e5a4a 
         Events : 587869 
 
    Number   Major   Minor   RaidDevice State 
       7       8       64        0      active sync   /dev/sde 
       5       8       80        1      active sync   /dev/sdf 
       6       8       96        2      active sync   /dev/sdg 
       4       8      112        3      active sync   /dev/sdh
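
Should the arrays ever need reassembling, the ARRAY lines for the mdadm config can be regenerated from the running arrays; a sketch, assuming the config lives in /etc/mdadm.conf (the path varies by distro):

# mdadm --detail --scan >> /etc/mdadm.conf

Check the file for duplicate ARRAY lines afterwards.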

Order in external casing

From top to bottom:

  1. ST3000DM001-1ER166, Serial W501VAJY
  2. ST3000DM001-1CH166, Serial Z1F58T6T
  3. HGST HDN724040ALE640, Serial P4HU954B
  4. ST3000DM001-1CH166, Serial Z1F1JTDT

Slot History

  1. Original -> replaced by a recertified drive (Z1F2X130 – 2015/04/25) -> replaced by a defective HGST 4TB that didn’t even start up on 2017/06/09 -> replaced by another SEAGATE 3TB drive from stock the same day (W501VAJY – delivered 2016/10/18)
  2. Original? (no history available)
  3. Original -> replaced by a recertified drive (W1F2L184) -> replaced by Z1F142XH on 2016/08/26 (in stock, delivered 2016/06/26), which turned out to be defective and out of warranty (returned to sender; the replacement is in slot 1 now) -> replaced by an HGST 4 TB (Serial: P4HU954B) on 2016/09/03
  4. Original? (no history available)

Remarks

  • [2017/06/10] I’m not really sure about the history of slot 3. A previous version said that it was slot 4, but the Amazon history says otherwise.
  • [2017/06/10] Despite still having a SEAGATE 3TB drive in stock I ordered a new HGST 4TB on Whitsunday, 2017/06/04; it was delivered as promised on 2017/06/07. Unfortunately I couldn’t get my hands on it until Friday 2017/06/09, because the neighbor who had accepted the parcel wasn’t around when I was: not at 3pm, 6pm or 9pm (Wednesday and Thursday). So I took an early lunch break on Friday at about 11:15am and rode home to pick it up, yay! Much to my dismay the drive was defective: instead of spinning up it made clicking, scratching sounds and didn’t show up in /dev. So I eventually replaced the still working, but failing drive with the 3TB SEAGATE from stock, because I didn’t trust it any more. After a hard crash on Wednesday hadante kicked the (healthy) HGST drive out of the array, so I had to rebuild with the failing drive. About 24 hours later the rebuild finished, lucky me! I’m gonna return the defective HGST drive to the sender next week instead of voiding my warranty.
# journalctl -k | grep md1 
Jun 09 15:25:31 hadante kernel: md: md1 stopped. 
Jun 09 15:25:31 hadante kernel: md/raid:md1: device sdf operational as raid disk 1 
Jun 09 15:25:31 hadante kernel: md/raid:md1: device sdh operational as raid disk 3 
Jun 09 15:25:31 hadante kernel: md/raid:md1: device sdg operational as raid disk 2 
Jun 09 15:25:31 hadante kernel: md/raid:md1: raid level 5 active with 3 out of 4 devices, algorithm 2 
Jun 09 15:25:31 hadante kernel: md1: detected capacity change from 0 to 9001374842880 
Jun 09 15:27:46 hadante kernel: md: recovery of RAID array md1 
Jun 10 13:44:01 hadante kernel: md: md1: recovery done.