Software-Defined Storage

What is software-defined storage? Software-defined storage (SDS) is a data storage scheme in which all storage-related control functions are implemented in software, removing the storage software's dependence on proprietary hardware. It uses standardized hardware (such as the x86 architecture) as the carrier and delivers enterprise-level storage functions and services in software. SDS is not the firmware in a storage device, but a software layer that lets the system's storage access be managed more flexibly and at a finer granularity. It controls how storage requests reach the physical storage, and how and where data is stored. Because software-defined storage is abstracted from the storage hardware, it can present a shared pool regardless of the capacity of the underlying hardware, improving efficiency and reducing costs. Compared with the traditional storage hardware box, the storage software has become the core of SDS. The rise of SDS stems from the rapid development

Storage trends and products

The primary storage market is consolidating, leaving only a few vendors as the main players, such as EMC, IBM and HP. Primary storage providers will put greater emphasis on the efficiency of the overall stack and on subscription-style financial models similar to cloud services. Companies like NetApp and Hitachi are expanding their technology stacks toward the cloud and big data, and their primary storage businesses are unlikely to see big growth in the future. Hitachi, for example, has shrunk its product lineup and now effectively has only two storage products (VSP 5000 and HCP). It has strategically chosen to focus higher in the stack, on IoT, data management, big data analytics and so on, and storage is now critical to its overall strategy. Secondary storage (storage that isn't business-critical) hasn't seen the same kind of consolidation as primary storage, and many startups still have the potential to disrupt the market.

File, Block and Object storage in distributed storage system

At the bottom of the storage system sit large amounts of data. The physical storage media in a single server are limited, and so is its I/O performance; distributed storage systems exist to address this. A distributed storage system is infrastructure that stores data on multiple physical servers which behave as one storage system, even though the data is distributed among them. It typically takes the form of a cluster of storage servers, with a mechanism for data synchronization and coordination between cluster nodes.

A distributed storage system can provide three types of storage: file, block, and object. The essential difference is the "user" of the data: the user of block storage is a software system that can read and write the block device, such as a traditional file system or a database; the user of file storage is a natural person; the user of object storage is other computer software.

File storage: the user of file storage is a natural person. All data are presented b
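To make the "data distributed between servers" idea concrete, here is a minimal sketch of one common placement scheme: hash the object key to pick a primary server, then keep replicas on the next servers in the ring. The server names and replica count are illustrative assumptions, not the actual layout of any particular product.

```python
import hashlib

SERVERS = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2

def place(key: str, servers=SERVERS, replicas=REPLICAS):
    """Return the list of servers holding copies of `key`."""
    # Hash the key so placement is deterministic and evenly spread.
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = h % len(servers)
    # Replicas go on the following servers in the ring.
    return [servers[(start + i) % len(servers)] for i in range(replicas)]

print(place("photo-001.jpg"))  # two servers from SERVERS, always the same two
```

Any client can recompute the same placement, so reads need no central lookup; real systems add rebalancing when servers join or leave.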


There are three storage deployment types primarily used by enterprises: DAS (Direct-Attached Storage), NAS (Network-Attached Storage) and SAN (Storage Area Network).

DAS: DAS is traditional mass storage in which the storage devices are directly and physically attached to the computer through an internal cable, without any network. It is still one of the most popular approaches. It provides block-level access through Small Computer System Interface (SCSI), Serial Advanced Technology Attachment (SATA), Serial Attached SCSI (SAS), etc.
Advantages:
- The storage device is dedicated, with high performance compared to NAS.
- Lower cost; inexpensive.
- Simple to configure and use.
Disadvantages:
- Inability to share data or unused resources with other servers efficiently. This is addressed by NAS and SAN, but at the risk of security issues and higher initial cost.
- Cannot be managed over the network.
- No high availability.
- Hard to expand storage capacity.

NAS: NAS is mass storage attached to a computer which

Data redundancy mechanism in storage

In engineering, redundancy is the duplication of critical components or functions of a system with the intention of increasing its reliability, usually in the form of a backup or fail-safe, or to improve actual system performance. Storage devices can also suffer problems such as bit errors or data loss, which require a data redundancy mechanism to protect the data. The following solutions are commonly used and valid for most storage devices:

Device mirroring (replication): a common solution is to constantly maintain an identical copy of the device's content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously, with some overhead and possibly some delays. The upside is the possibility of concurrent reads of the same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational, and service can continue from it while the failed device is replaced.
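The mirroring behavior described above can be sketched in a few lines: every write goes to both copies, and a failed device is masked by its mirror. The in-memory dicts stand in for real disks; this is an illustration, not a production implementation.

```python
class MirroredStore:
    def __init__(self):
        self.copies = [dict(), dict()]   # two identical "devices"
        self.failed = [False, False]

    def write(self, key, value):
        # Both copies are updated together: this is the write overhead.
        for copy in self.copies:
            copy[key] = value

    def read(self, key):
        # Read from any healthy copy; skip a device marked defective.
        for i, copy in enumerate(self.copies):
            if not self.failed[i]:
                return copy[key]
        raise IOError("both copies failed")

store = MirroredStore()
store.write("block-7", b"data")
store.failed[0] = True              # simulate the first device failing
print(store.read("block-7"))        # → b'data', served from the mirror
```

Real mirroring must also handle partial writes and resynchronizing a replaced device, which this sketch omits.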

Four types of data storage structures

There are four types of data storage structures: sequential storage, linked storage, index storage, and hash storage. Sequential and linked structures apply to in-memory structures; index and hash structures are suitable for interaction between external storage and memory.

Sequential storage: a group of contiguous storage units is used to store the data elements of a linear table in order; this is called the sequential storage structure of a linear table. Features: random access to table elements; insertion and deletion operations require moving elements.

Linked storage: the data elements of a linear table are stored in an arbitrary set of storage units (which may be contiguous or discontinuous). It does not require logically adjacent elements to be physically adjacent, so it avoids the weakness of the sequential storage structure, but it also loses the sequential table's advantage of random access. Features: lower storage density compared to sequential storage
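The trade-off between the first two structures can be shown in a few lines. A Python list models sequential storage: element i sits at a computable position (random access), but a middle insertion shifts every later element. The tiny Node class models linked storage: insertion just rewires pointers, but reaching element i means walking the chain.

```python
class Node:
    """One cell of a singly linked list: a value plus a pointer."""
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

# Sequential storage: contiguous cells, O(1) access, O(n) insertion.
seq = [10, 20, 40]
seq.insert(2, 30)          # elements after index 2 are shifted

# Linked storage: O(1) insertion once positioned, O(n) traversal.
head = Node(10, Node(20, Node(40)))
node20 = head.next
node20.next = Node(30, node20.next)   # rewire two pointers

linked = []
n = head
while n:                   # walk the chain to read it back
    linked.append(n.value)
    n = n.next

assert seq == linked == [10, 20, 30, 40]
```

Both now hold the same logical table; only the physical layout and cost profile differ.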

Introduction to the basic architecture and operation of Internet Small Computer Systems Interface

iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. iSCSI works on top of the Transmission Control Protocol (TCP) and provides block-level access between the iSCSI initiator and the storage target by carrying SCSI commands over a TCP/IP network. iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. It can transmit data over local area networks (LANs), wide area networks (WANs), or the Internet, and enables location-independent data storage and retrieval.

Overview of the overall architecture: iSCSI is a remote mapping technology that maps a storage device on a remote server to the local machine and presents it as a block device (which can be thought of as a disk). From the perspective of an ordinary user, the mapped disk is no different from a locally attached disk. This mapping is based on the SCSI protocol, which is carried over TCP/IP between the initiator and the target.
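"Block-level access" means the application addresses the device by logical block number rather than by file name. The sketch below illustrates that access pattern; a temporary file stands in for the mapped iSCSI disk (e.g. /dev/sdX), which is an assumption made purely for illustration.

```python
import tempfile

BLOCK_SIZE = 512  # a common logical block size; an assumption here

def write_block(dev, lba, data):
    """Write one block at logical block address `lba`."""
    assert len(data) == BLOCK_SIZE
    dev.seek(lba * BLOCK_SIZE)   # position = block number * block size
    dev.write(data)

def read_block(dev, lba):
    """Read one block back from logical block address `lba`."""
    dev.seek(lba * BLOCK_SIZE)
    return dev.read(BLOCK_SIZE)

with tempfile.TemporaryFile() as disk:
    disk.truncate(8 * BLOCK_SIZE)            # an 8-block "disk"
    write_block(disk, 3, b"Q" * BLOCK_SIZE)  # write logical block 3
    assert read_block(disk, 3) == b"Q" * BLOCK_SIZE
```

An iSCSI initiator does exactly this kind of addressing, except each block operation becomes a SCSI command carried in a TCP segment to the remote target.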

Thin provisioning helps you improve resource utilization

Thin provisioning provides efficient storage space utilization by allowing the system to present storage devices without occupying any space until data is written into the logical volumes. For example, when creating a 1 TB logical volume, QStora does not allocate disk space immediately; it consumes real capacity dynamically as data is written. Thick provisioning, by contrast, pre-allocates the entire space for the volume before use: even if no data is written to disk, creating a 1 TB volume actually takes up 1 TB of physical disk space, and that occupied physical storage cannot be used for any other purpose. Thin provisioning gets rid of the upfront capacity reservation, thereby unlocking and freeing the capacity that would otherwise be reserved upfront and trapped within thick-provisioned volumes.

The advantages of thin provisioning include:
- It allows you to share resources between volumes. Some volumes may not use all or much of what they are allocated, and that unused capacity remains available to other volumes.
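A sparse file on Linux gives a small, observable demonstration of the same idea: the file reports its full logical size immediately, but disk blocks are only allocated as data is written. (This assumes a filesystem that supports sparse files, such as ext4 or xfs; it is an analogy for thin provisioning, not QStora's implementation.)

```python
import os
import tempfile

SIZE = 1 << 30  # a 1 GiB "volume"

fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, SIZE)        # "create" the volume without writing data

st = os.stat(path)
print("logical size:", st.st_size)             # 1073741824
print("allocated bytes:", st.st_blocks * 512)  # far smaller than 1 GiB
os.remove(path)
```

The gap between `st_size` and `st_blocks * 512` is exactly the capacity a thick-provisioned volume would have reserved up front.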

What is MPIO and how to use it?

MPIO is an acronym for Multipath Input/Output. It is a framework designed to provide load balancing and failover by establishing multiple logical paths between storage devices and the server. The logical paths can be created through redundant physical components such as buses, controllers, switches or bridge devices. Should one or more controllers, ports or switches fail, the server can route I/O through an alternate path (the remaining controller, port or switch) transparently, with no changes visible to the applications other than perhaps increased latency.

Microsoft provides a Device Specific Module (DSM) in Windows Server 2008, 2012 and 2016. It supports Asymmetric Logical Unit Access (ALUA) and can configure a Multipath I/O (MPIO) environment with storage devices conforming to the SCSI Primary Commands (SPC) specification. The DSM provides the load-balancing policies, and the available policies generally depend on the ALUA support of the storage array attached to Windows.
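The failover-plus-load-balancing behavior can be sketched as a path selector: round-robin across healthy paths, skipping any path marked down. The path names are illustrative assumptions, and this is a conceptual model of what the MPIO framework does, not its actual implementation.

```python
import itertools

class MultiPath:
    def __init__(self, paths):
        self.paths = list(paths)
        self.down = set()                 # paths currently failed
        self._rr = itertools.cycle(range(len(self.paths)))

    def pick(self):
        """Round-robin over paths, transparently skipping failed ones."""
        for _ in range(len(self.paths)):  # try each path at most once
            i = next(self._rr)
            if self.paths[i] not in self.down:
                return self.paths[i]
        raise IOError("all paths failed")

mp = MultiPath(["ctrl-A:port-1", "ctrl-B:port-1"])
print(mp.pick())                  # → ctrl-A:port-1
mp.down.add("ctrl-A:port-1")      # simulate a controller failure
print(mp.pick())                  # → ctrl-B:port-1, I/O continues
```

From the application's point of view nothing changed after the failure except, possibly, latency, which matches the transparency described above.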

How to avoid vendor lock-in?

Single-vendor solutions are usually easier to implement and manage, but they also make lock-in easier. When you need a new storage system, you may purchase storage arrays based on workload requirements. Traditionally, these systems bundle hardware and software, and you are locked into purchasing additional hardware for the system from the same vendor. When you need to increase storage capacity, or want to create multiple sites, you must buy from the same vendor, and you are locked into that vendor's technology road map and its prices. If you don't want to be locked in by hardware vendors, you can choose software-defined storage. However, can all software-defined storage prevent vendor lock-in? Some products claim to be software-defined but depend on specific hardware in a Hardware Compatibility List (HCL); driver compatibility issues often occur, and hardware and software vendors often blame each other. If you store data on the

Are all your IT resources being used efficiently?

As a Chief Information Officer (CIO) or corporate decision maker, you manage your company's IT operations and infrastructure. At present, the global economy is not very prosperous; you may not have much budget to buy new equipment, yet storage requirements keep growing. You may have servers deployed for applications such as databases and web applications; these servers have spare disk slots, and their CPU and memory utilization may not be very high. So why not make full use of these servers: fill up the disk slots and let them serve data storage at the same time? Especially for secondary storage such as backups and archives, you don't need to purchase a separate SAN storage device; you can use software-defined storage to save TCO. Can all software-defined storage be deployed alongside existing applications, regardless of the hardware environment and operating system? Of course not. Some software-defined storage products can only be deployed on the

Why did we develop QStora?

We are a team of storage experts with more than 10 years of experience. We have gone from vigorous young people to middle-aged people who are beginning to have white hair, but our enthusiasm for the storage business has never changed. We have designed a storage product with more than an exabyte of data in a single cluster and 99.9999% weekly availability for two consecutive years. We do not build on open-source storage, so we have full control over the code. Our product has served many major customers around the world, and we are very proud of developing such a popular storage product. We hope that our storage product can be deployed and used by more people, so that we are not the only ones able to operate and maintain such a system. We believe that software-defined storage should be very simple: like other software, it should be able to be downloaded, installed and used directly. Therefore, we drew on our storage experience to develop QStora and make it more flexible and easier to use.

Why is QStora a software-defined storage controller?

A storage controller is a device that manages physical disk drives and presents them to the computer as logical units; it is also called a "storage processor" or "array controller". A storage controller integrates the storage space of multiple storage devices and provides it to clients as a single storage space. When it receives a write request from a client, the storage controller decides which of the storage devices to allocate the data to, and stores the data on the selected one. When it receives a read request, it finds where the data is stored, reads it from the storage device and transmits it to the client. The storage controller performs these tasks efficiently and reliably. In addition to reading and writing data, the storage controller also implements various value-added functions, including dynamic allocation of volume capacity, snapshots, and so on. QStora is a software-defined storage controller.
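The core mapping job described above, turning one linear logical space into locations on multiple devices, can be sketched with simple striping. The device count and stripe layout are illustrative assumptions, not QStora's actual data placement.

```python
NUM_DEVICES = 4  # an illustrative device count

def locate(lba: int):
    """Map a logical block address to (device index, offset on device)."""
    # Striping: consecutive logical blocks land on consecutive devices,
    # so sequential I/O is spread across all of them.
    return lba % NUM_DEVICES, lba // NUM_DEVICES

# The client sees one linear space; the controller spreads it out:
for lba in range(6):
    dev, off = locate(lba)
    print(f"logical block {lba} -> device {dev}, offset {off}")
```

A real controller layers redundancy, caching and the value-added functions mentioned above on top of a placement function like this one.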

QStora helps you make the best use of existing resources

QStora is the only software-defined storage product in the industry that can be deployed alongside any other applications. This feature helps you greatly improve the utilization of existing resources and save TCO, and lets you easily migrate your QStora services to other servers. QStora is green: it runs as a group of user-mode processes, does not rely on any specific Linux kernel version or distribution, does not rely on or modify the operating system environment, does not monopolize entire hard drives, and does not interfere with the execution of any other process. Thus, QStora can run in the same Linux operating system instance concurrently with other applications. We call this feature "green". On one hand, it helps users improve the utilization of existing hardware resources; on the other hand, it lowers the barrier for potential users to try QStora - not even a virtual machine is needed! Some software-defined storage products cannot be installed

QStora Software Architecture

QStora adopts a three-layer distributed storage architecture, and the block storage service can be managed in a unified manner within a single cluster.

As shown in the figure above (QStora Software Architecture), the functional architecture of QStora consists of:
- Protocol layer: provides a standard iSCSI interface for applications to access the storage system.
- Service layer: provides the block service.
- Persistence layer: implements persistent storage, providing functions such as Erasure Code, data rebuilding and rebalancing, disk management, and data read/write capabilities.
- Management: operates, manages, and maintains the system, providing functions such as server management, LUN and Target management, installation, upgrade, monitoring and alert reporting.

The following describes the modules in the Management layer:
- Server Management: manage the servers in the cluster; you can add and delete servers, and add and delete disk paths on a server, so as to expand and shrink cluster capacity
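To give an intuition for the Erasure Code function in the persistence layer, here is the simplest possible case: XOR parity (RAID-5-style, a simplified stand-in for general erasure codes), where any single lost shard can be rebuilt from the remaining ones. The shard contents are illustrative.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length shards."""
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"hello---", b"world!!!"   # two data shards on two nodes
parity = xor(d1, d2)                # parity shard stored on a third node

# Lose d1 (e.g. a disk fails): rebuild it from parity and d2.
rebuilt = xor(parity, d2)
assert rebuilt == d1
```

Unlike mirroring, which doubles the stored bytes, this scheme protects two shards with only 50% overhead; production erasure codes generalize the idea to tolerate multiple simultaneous failures.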