Lenovo ThinkSystem & System x Data Recovery
ThinkSystem SR650/SR630/SR550, System x3650/x3550, ThinkSystem DE series, Lenovo DM NAS — RAID 930/940, ServeRAID — Lenovo server specialists
Lenovo ThinkSystem servers and legacy System x (inherited from IBM) are the foundation of critical infrastructure in thousands of Spanish businesses. Their RAID architecture, built on Broadcom RAID 930/940 or legacy ServeRAID controllers, introduces specific failure modes that require specialised experience to recover from without further data loss.
ThinkSystem RAID 930-8i/16i and 940-8i/16i controllers store critical RAID metadata (DDF or MegaRAID config) on both the controller and the disks. A controller failure can leave the array inaccessible even if every disk is healthy, and replacing it with an identical controller does not always import the configuration automatically.
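As an illustration of what that on-disk metadata looks like, the sketch below scans the tail of a cloned disk image for the SNIA DDF anchor signature (0xDE11DE11). Broadcom MegaRAID metadata is DDF-compatible, but the exact layout and byte order vary by controller generation, so treat this as a diagnostic aid under those assumptions, not a parser; the path and scan window are hypothetical.

```python
import struct
import sys

DDF_SIGNATURE = 0xDE11DE11  # SNIA DDF header signature
SECTOR = 512
TAIL_SECTORS = 2048         # scan the last ~1 MiB, where the anchor lives

def find_ddf_anchor(image_path: str) -> None:
    """Report sector-aligned offsets in the image tail whose first four
    bytes match the DDF signature, in either byte order (controller
    implementations differ)."""
    with open(image_path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        start = max(0, size - TAIL_SECTORS * SECTOR)
        f.seek(start)
        tail = f.read()

    for off in range(0, len(tail) - 3, SECTOR):
        for fmt in (">I", "<I"):
            (sig,) = struct.unpack_from(fmt, tail, off)
            if sig == DDF_SIGNATURE:
                print(f"DDF header candidate at LBA {(start + off) // SECTOR}")

if __name__ == "__main__":
    find_ddf_anchor(sys.argv[1])
```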
In servers with RAID 5 (single parity) or RAID 6 (double parity), the failure of two or three disks respectively makes the array completely unreadable to the controller. The automatic rebuild triggered after the first failure subjects the remaining disks to intensive reading, frequently causing a second failure among disks from the same manufacturing batch.
A RAID controller or server UEFI firmware update that fails mid-process can corrupt the array configuration table. XClarity may report the array as "Foreign" or "Offline". Importing the foreign configuration without verification can destroy the data.
Rebuilding a RAID 5/6 array on a ThinkSystem SR650 with 10TB+ disks can take over 24 hours. During that time, if a second disk develops a URE (Unrecoverable Read Error) or fails mechanically, the rebuild aborts and the array becomes inaccessible. With high-capacity disks, the statistical probability of hitting a URE during a rebuild can exceed 50%, depending on the drives' rated error rate.
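That figure follows from a simple model: the probability of at least one URE while reading b bits at a rated error rate r is 1 − (1 − r)^b. A minimal sketch, assuming a hypothetical 6-disk RAID 5 of 10TB drives (five survivors read end-to-end) and common spec-sheet rates:

```python
def p_ure_during_rebuild(surviving_disks: int, disk_tb: float, ure_rate: float) -> float:
    """Probability of at least one unrecoverable read error while
    reading every surviving disk end-to-end during a rebuild.

    ure_rate is the spec-sheet rate in errors per bit read, e.g.
    1e-14 for typical desktop SATA, 1e-15 for enterprise SAS.
    """
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return 1 - (1 - ure_rate) ** bits_read

# Hypothetical array: 6-disk RAID 5 of 10 TB drives, so a rebuild
# must read the 5 surviving members in full.
for rate, label in [(1e-14, "1 per 10^14 bits"), (1e-15, "1 per 10^15 bits")]:
    p = p_ure_during_rebuild(surviving_disks=5, disk_tb=10.0, ure_rate=rate)
    print(f"URE spec {label}: P(rebuild hits a URE) = {p:.1%}")
```

Under the 10^14 consumer rating this evaluates to roughly 98%; under the 10^15 enterprise rating, about 33% — which is why drive class matters as much as capacity.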
RAID 930/940 controllers include a cache module protected by a CacheVault supercapacitor (the successor to the older battery-backed units). If the supercapacitor fails during a power outage, pending write-back cache data is lost, causing file system inconsistencies that can corrupt the entire RAID.
⚠ Attempting a rebuild or importing a foreign configuration on your own can turn a recoverable situation into total data loss.

How we recover your data, step by step:
Step 1: Bit-by-bit imaging of each SAS/SATA/NVMe disk with DeepSpar Disk Imager. We work on the images, never on the original disks. 12Gbps SAS and U.2 NVMe disks are supported natively.
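As a rough sketch of the skip-and-return strategy such imagers apply on a first pass (real hardware imagers also manage timeouts, head maps and drive resets, which software alone cannot), assuming Linux and hypothetical paths:

```python
import os
import sys

BLOCK = 64 * 512  # 32 KiB per read; real imagers adapt this dynamically

def image_with_skips(src: str, dst: str, badmap: str) -> None:
    """First pass: copy what reads cleanly, log what doesn't, and
    leave bad regions for later passes (Linux only: uses os.pread)."""
    fd = os.open(src, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    with open(dst, "wb") as out, open(badmap, "w") as log:
        out.truncate(size)                 # sparse destination image
        pos = 0
        while pos < size:
            want = min(BLOCK, size - pos)
            try:
                data = os.pread(fd, want, pos)
                out.seek(pos)
                out.write(data)
            except OSError:                # unreadable: record and move on
                log.write(f"{pos} {want}\n")
            pos += want
    os.close(fd)

if __name__ == "__main__":
    image_with_skips(sys.argv[1], sys.argv[2], sys.argv[3])
```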
Step 2: Reading the MegaRAID/DDF configuration stored on each disk to determine stripe size, RAID level, disk order and data offset, cross-verified against XOR parity patterns.
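One property makes the cross-verification cheap: in a consistent RAID 5 set, the byte-wise XOR of all members at identical offsets is zero throughout the data area, whatever the stripe size or parity rotation, because each stripe row's data and parity cancel out. A minimal sketch of that check, assuming equal-size member images:

```python
import sys
from functools import reduce

CHUNK = 1 << 20  # 1 MiB

def parity_check(image_paths: list[str], sample_bytes: int = 256 << 20) -> None:
    """XOR all member images together chunk by chunk; in a healthy,
    consistent RAID 5 set the result is all zeros. Non-zero bytes
    point at a stale, damaged or out-of-set member."""
    files = [open(p, "rb") for p in image_paths]
    nonzero = checked = 0
    try:
        while checked < sample_bytes:
            chunks = [f.read(CHUNK) for f in files]
            if not all(chunks) or len({len(c) for c in chunks}) != 1:
                break                      # EOF or size mismatch
            xored = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
            nonzero += sum(1 for byte in xored if byte)
            checked += len(chunks[0])
    finally:
        for f in files:
            f.close()
    print(f"checked {checked >> 20} MiB, non-zero parity bytes: {nonzero}")

if __name__ == "__main__":
    parity_check(sys.argv[1:])
```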
Step 3: Offline reconstruction of the RAID 5/6/10/50/60 from the cloned images, regenerating missing data via parity calculation, then mounting the file system (NTFS, ext4, XFS, VMFS).
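The parity arithmetic behind that regeneration is plain XOR: a missing RAID 5 member equals the XOR of all surviving members at identical offsets. A sketch of just that step, assuming complete, equal-size clones (the geometry — stripe size, order, rotation — only matters afterwards, when the images are assembled into a virtual volume):

```python
import sys
from functools import reduce

CHUNK = 1 << 20

def rebuild_missing_member(surviving_images: list[str], output_path: str) -> None:
    """Write the image of the one missing RAID 5 member as the XOR of
    all surviving members, read at identical offsets. Assumes the
    survivors are complete, equal-size clones."""
    files = [open(p, "rb") for p in surviving_images]
    try:
        with open(output_path, "wb") as out:
            while True:
                chunks = [f.read(CHUNK) for f in files]
                if not all(chunks):
                    break
                out.write(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))
    finally:
        for f in files:
            f.close()

if __name__ == "__main__":
    # e.g. rebuild_missing_member(["d0.img", "d1.img", "d3.img"], "d2.img")
    rebuild_missing_member(sys.argv[1:-1], sys.argv[-1])
```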
Step 4: Data delivered on an external drive with a recovery report and file listing; hash integrity verification included. You only pay if we recover your data.
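The integrity verification can be reproduced on the receiving end. A minimal sketch that writes per-file SHA-256 digests in the two-space format `sha256sum -c` accepts (file names here are hypothetical):

```python
import hashlib
import os
import sys

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_manifest(root: str, manifest: str = "manifest.sha256") -> None:
    """Walk the recovered tree and emit 'digest  path' lines, the
    format that `sha256sum -c manifest.sha256` verifies."""
    with open(manifest, "w", encoding="utf-8") as out:
        for dirpath, _, names in os.walk(root):
            for name in sorted(names):
                p = os.path.join(dirpath, name)
                out.write(f"{sha256_of(p)}  {p}\n")

if __name__ == "__main__":
    write_manifest(sys.argv[1])
```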
Service options tailored to your urgency and budget
| Model | RAID Controller | Common Failures |
|---|---|---|
| ThinkSystem SR650 V2/V3 | RAID 940-8i/16i | Multiple SAS disk failure during rebuild, write-back cache corruption, backplane failure |
| ThinkSystem SR630 V2/V3 | RAID 940-8i | Compact 1U server: disk overheating, controller failure due to temperature |
| ThinkSystem SR550 | RAID 930-8i | Degraded RAID 5, URE during rebuild, CacheVault battery failure |
| System x3650 M5 | ServeRAID M5210 | IBM legacy model: obsolete ServeRAID controller, ageing SAS disks, unsupported firmware |
| System x3550 M5 | ServeRAID M1215 | 1U rack server: limited space, 2.5" SAS disks, backplane failure, degraded RAID 1 |
| ThinkSystem DE2000/DE4000/DE6000 | Integrated (dual controller) | Storage arrays: dual controller failure, volume group corruption, shelf expansion failure |
| Lenovo DM3000/DM5000/DM7000 | ONTAP (NetApp-based) | Enterprise NAS based on ONTAP: WAFL corruption, aggregate offline, disk shelf failure |
| Service | Description | Timeframe | Price |
|---|---|---|---|
| Logical | RAID configuration corruption, damaged file system, lost partition, inaccessible VMFS | 4–12 days | 400–800€ |
| Physical | Mechanical SAS/SATA disk failure(s), clean room intervention + RAID reconstruction | 7–15 days | 800–1800€ |
| Multi-disk (surcharge) | Arrays of 6+ disks, RAID 50/60, multiple virtual drives on the same controller | 10–20 days | +300€ |
| Urgent | Maximum priority, extended working days. Ideal for production servers. | 24–72h | +50% |
My RAID controller failed — can I just replace it and import the configuration?
In theory, yes: MegaRAID metadata is stored both on the controller and on the disks (DDF format). After installing an identical replacement controller, it should detect the array as a "Foreign Configuration" and offer to import it. However, if the original controller corrupted the metadata before failing, the import may fail or, worse, initialise the disks. We always recommend cloning the disks before attempting the import.
Can you recover a RAID 5 with two failed disks?
A RAID 5 with two failed disks has sectors with no parity available. However, if one of the disks has only partial bad sectors (rather than a total failure), we can recover most of the data: we clone the disks with adaptive reading (DeepSpar), reconstruct the array virtually and use parity to regenerate the data from the missing sectors. The typical recovery rate in these cases is 85–98%.
What is Lenovo XClarity and does it matter for recovery?
Lenovo XClarity is the server management platform (the equivalent of Dell's iDRAC or HP's iLO). XClarity itself does not affect data stored in the RAID, but its event logs are very valuable for diagnosis: they record exactly when and why each disk failed, and whether there were cache errors, power outages or temperature issues. If you can access XClarity, export the logs before sending the equipment.
Can you recover VMware virtual machines from a ThinkSystem server?
Yes. Many ThinkSystem SR650/SR630 run VMware ESXi with VMFS datastores on hardware RAID. We recover both at the VMFS datastore level (complete .vmdk files) and at the file-system level inside the VM (NTFS, ext4, etc.). If the VMFS is corrupt but the RAID is intact, we extract the VMs directly; if the RAID has also failed, we first reconstruct the array and then access the VMs.
What is the difference between System x and ThinkSystem for recovery purposes?
The main difference is the RAID controller. System x3650 M4/M5 use ServeRAID controllers (based on older LSI MegaRAID), while ThinkSystem uses RAID 930/940 (modern Broadcom MegaRAID). Both store metadata in a compatible format, but the diagnostic tools differ: StorCLI for the newer controllers, MegaCLI for the legacy ones. The recovery process is equivalent but requires generation-specific tools.
Do you recover ThinkSystem DE storage arrays?
Yes. ThinkSystem DE2000/DE4000/DE6000 arrays (based on NetApp E-Series) use dual active/active controllers with RAID 1/5/6/DDP. If one controller fails, the other takes over; if both fail or the volume group is corrupted, we extract the disks and reconstruct the array offline. The DDP (Dynamic Disk Pool) metadata format from NetApp E-Series is complex but fully recoverable with our tools.
Is it safe to ship server disks for recovery?
Yes, with proper precautions. Server SAS disks are more robust than consumer SATA drives, but they must be shipped in anti-static packaging with shock protection. We offer free pickup with specialised packaging across Spain, and disks travel insured with tracking numbers. We also accept the complete server if you prefer not to remove the disks.
Urgent pickup across Spain. Lab operational including weekends for urgent cases.
Do not attempt a rebuild, do not import configurations. Shut down the server and contact us.