Monthly Archive: Bahman 1394


Setting Up VMware VASA on IBM Storwize V7000 Storage

Asking the Google search engine the question in this post's title (or its variations) does not return satisfactory results. It is very difficult to find a step-by-step guide to our goal, which is configuring the IBM Storage Provider for VMware VASA (VMware APIs for Storage Awareness). For anyone used to storage profiles (like me), the lack of VASA is a real inconvenience. For IBM Storwize, DS, and XIV arrays (updated to the latest firmware), we can use new software: IBM Spectrum Control (this post covers Base version 2.1.1 for Storwize). It provides full integration of the IBM arrays with VMware vSphere (5.0 through 6.0), including a plugin for the vSphere Web Client. The whole thing is fairly simple to set up, and getting VMware VASA running takes only a moment.


We start by preparing a VM with RedHat 6; if we have the right template we are almost done, otherwise a little extra work awaits us. If we have active support for our storage, we log in to IBM Fix Central and download IBM Spectrum Control Base Edition. After unpacking the archive, we first install all the auxiliary packages (rpm -ivh *.rpm) and then run the installer ./ibm_spectrum_control-2.1.1-3435-x86_64.bin (installation takes a moment):
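Condensed into commands, the steps above look roughly like this (run as root on the RedHat 6 VM; the archive name is an example, the installer name is the one from this 2.1.1 build and will differ for other versions):

```shell
# unpack the downloaded Spectrum Control Base Edition archive (example name)
tar -xzvf IBM_Spectrum_Control_Base_2.1.1.tar.gz
cd IBM_Spectrum_Control_Base_2.1.1

# install all the auxiliary packages first
rpm -ivh *.rpm

# then run the installer itself
./ibm_spectrum_control-2.1.1-3435-x86_64.bin
```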


At this point, you can log in to Spectrum Control at https://IP-or-FQDN:8443. The initial user is admin with the password admin1! (if you get the message "Server Error!", disable SELinux on your RedHat). If we are only interested in VASA, we need to configure three things; from the menu we choose "Storage Credentials" and "VASA Credentials" one after another.


Storage credentials are the user (with administrative privileges) and password with which IBM Spectrum Control authenticates to all connected IBM arrays. Note: every array must have the same user with the same password.


VASA credentials are the user and password with which every connected VMware vCenter authenticates to IBM Spectrum Control.


In the next step, under "Storage Systems", we connect our storage (in my case, our lab IBM Storwize V7000):

We wait a moment; the end result looks like this:


Now log in to the vSphere Web Client, go to "vCenter Servers -> Manage -> Storage Providers", and add a connection to IBM Spectrum Control (the VASA service):


We wait a few minutes and check that everything is OK:


Check that the configured datastore shows the appropriate storage capability:


At this point, the configuration of VMware VASA is finished. Of course, this is not the end of what IBM Spectrum Control can do. First of all, by connecting VMware vCenter (one or many), you can install a vSphere Web Client plugin that gives the ability to create and manage volumes on the storage (pools are created at the Spectrum Control level).


We can also connect IBM Spectrum Control to VMware Orchestrator (a plugin is supplied for it), and to VMware Operations Manager (a pak is supplied) to monitor IBM XIV from there (Storwize is not currently supported).


EMC Data Domain Password Recovery

Disclaimer: This is not a secret procedure; all the information is available on EMC Powerlink. But remember, normally this procedure is performed by EMC Support. You do it at your own risk.

This post is a synthesis of knowledge from the Internet. The entire procedure takes one minute and is completely safe. However, it requires entering SE (system engineer) mode and then the Data Domain Bash shell. The main requirement is to also have an active user, other than sysadmin, with administrator privileges. If you do not have such a user, you must turn to EMC Support for help (sorry).


We start by logging in to the Data Domain via SSH, then enter the following command sequence:

system show serialno

priv set se

(the password is the array's serial number)





fi st






At this point, we are in Bash as the root user and we can change the sysadmin password using the passwd command. The whole procedure looks like this:
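Putting the pieces together, the session looks roughly like this (a hedged transcript: prompts are abbreviated, and the step between `fi st` and the Bash prompt is covered by the screenshots omitted here):

```shell
# 1. read the serial number - it is the SE-mode password
system show serialno

# 2. enter SE (system engineer) mode; enter the serial number when prompted
priv set se

# 3. filesystem status, the entry point used here to reach the shell
fi st

# 4. once at the Bash prompt as root, reset the sysadmin password
passwd sysadmin
```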


This is all.


VMware Tools 10.0.0 for Windows 10

On 09/03/2015, VMware released VMware Tools 10.0.0 (without much publicity). This version is fully compatible with vSphere 5.x and 6.0; for the moment it has to be downloaded manually. The most important change is the addition of support for Windows 10. After installing the new version of VMware Tools in Windows 10, the system is properly recognized in vCenter, and there are also new drivers, including one for the graphics card!



I should mention that the latest VMware Horizon View 6.2 also now supports Windows 10. With VMware Tools 10.0.0, you can now freely test Windows 10 in Horizon View 6.2 without any compatibility problems.


Setting Up VMware VASA on EMC VNX5200 Storage

First, let me inform all engineers planning to install and configure VMware VASA on vSphere 5.5 for the VNX5200 (or any other VNX2 model) that this is impossible, so do not waste your time.

This is due to a bug in that version, which VMware itself has acknowledged. The problem does not exist in vSphere 6.0, so those using version 6 can install and configure VASA without any issue. But if you are using vSphere 5.5, to work around the problem you must install an operating system such as Windows or Linux as an EMC SMI-S Agent, connect it to the VNX, and then register it in vCenter.


The steps are very simple. First, download the latest SMI-S Provider (version 4.6) from the website, then during installation select the Array Provider component.

We also select the SYMAPI component.


We use the TestSmiProvider.exe command to communicate with the SMI-S Agent:


We register the VNX storage array in SMI-S:


We use the dv command to check that the array has been discovered. Note that more than one array can be registered here.
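As a rough sketch, registering a VNX from the interactive TestSmiProvider console looks like this (the prompt wording varies between SMI-S Provider releases, and the SP addresses and credentials below are placeholders, not values from this post):

```
(localhost:5988) ? addsys       <- register a new array
Add System {y|n} [n]: y
ArrayType (1=Clar, 2=Symm) [1]: 1
One or more IP address(es): <SPA-IP> <SPB-IP>
User [null]: sysadmin
Password [null]: ********

(localhost:5988) ? dv           <- display version and attached arrays
```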


In the final step, we add the SMI-S Agent to vCenter as a Storage Provider. As you can see in the picture, only one array has been added in vCenter. You can also use a single agent for multiple vCenters.


You can now view and use your storage profiles.

The VCenter Technical and Engineering Group, a provider of complete EMC storage and backup solutions, is ready to meet its customers' needs.

Phone: 021-88884268



Configuring EMC MirrorView

When building their own Disaster Recovery solutions, people often reach for data replication between storage arrays. One such solution (let us add: the cheapest) is EMC MirrorView. It is a very simple, easy-to-set-up service that fully cooperates with VMware Site Recovery Manager (SRM). LUN replication can be done synchronously or asynchronously; for the theory and terminology, I refer you to the StorageFreak blog, where my colleague Tomek has described everything exactly. We will focus on the MirrorView configuration directly on the VNX arrays, in my case a VNX 5200 and a VNX 5300.


As part of the preparations, create a SAN connection between the arrays. We connect the ports described as MirrorView: port A-0 on SPA of the first array to port A-0 on SPA of the second array (and correspondingly for SPB). Ports that will take part in replication cannot be used in host Storage Groups. If these ports are used to communicate with hosts, remove them from the Storage Group before connecting the arrays (otherwise an SP controller restart and a lot of nasty messages await us).


After the arrays are connected, verify that they see each other correctly by going to Hosts -> Initiators.

VNX 5200:

VNX 5300:

As you can see, the connection is set up correctly. To be able to perform mirror operations, both arrays must know about each other, i.e. be in the same domain or in two different domains (local and remote).


This operation is carried out from the newer array (or the one with the higher firmware version); in my case, from the VNX 5200 I add the VNX 5300 (the other way around it will not work).


At this point I have two domains on the VNX 5200, Local and Remote, while the VNX 5300 has only the Local domain.


From the VNX 5200, both arrays can now be managed simultaneously, switching between them seamlessly at the Unisphere client level.


Next, if we do not already have one, we will create a LUN for the "write intent logs". This log helps the array recover from problems that might occur during replication (something like a transaction log). The LUN itself does not have to be big (the minimum requirement is 128 MB), but we cannot create it as part of a Pool; it must come from a RAID Group. Additionally, there must be two of these logs, one for each SP. Under Storage -> Storage Configurations -> RAID Groups, create two new groups and create the new LUNs.


Now, under Data Protection, click on "Configure Mirror Write Intent Log" and add our LUNs. The Write Intent Log is not necessary for replication; if we do not have spare disks from which to create a RAID Group, we can skip this step (its existence, however, increases safety).


Then we create the Reserved LUN Pool. The RLP is used for snapshots and to present the VMFS to ESXi during SRM tests; it is also necessary for asynchronous replication. The LUNs themselves do not have to be big (it depends on the amount of changes on the production volumes accumulated between successive copy steps of an asynchronous copy). I created three 512 GB LUNs (they cannot be Thin). Add them under Data Protection -> Reserved LUN Pool.


Since VMware SRM can switch in both directions, create a similar set of LUNs on the second array.


Now we move on to setting up the replicas: create a new LUN (or choose an existing one) and from its menu choose "Create Remote Mirror".


Depending on the distance, select whether it will be a synchronous copy (latency of no more than 10 ms) or asynchronous (latency of no more than 200 ms).


And so on for each LUN. Now we go to the remote array and carry out the same configuration (create a LUN). After this operation, we return to the first array and check under LUN Mirrors that everything is OK (Active).


Select the LUN and click "Add Secondary". The previously prepared LUN on the remote array must be the same size as the source and cannot be assigned to any Storage Group.


At this point, we have a defined mirror image of our volume (enable synchronization).


If we have more volumes subject to synchronization, and these volumes additionally serve a single vSphere DRS cluster, it is worth combining them into one Mirror Consistency Group.


This ensures that all synchronization operations are carried out simultaneously on all LUNs.


In addition, a Consistency Group translates directly into a VMware SRM Protection Group. At this stage, the MirrorView configuration is complete; the case described here covers replication in one direction. Replication in both directions (Bi-Directional) is also possible, and the configuration is very similar. Of course, in the Bi-Directional case we are talking about two different LUN sets, each replicated from one array to the other (we then have two active data centers, each replicated to the other site).


EMC Avamar – Backup EMC Celerra (VNX Unified) via NDMP


If you use EMC Celerra (NAS) with CIFS/NFS shares in your environment, sooner or later you will need to back up the data held there. Such resources can be backed up using the Network Data Management Protocol (NDMP). The author of this protocol is NetApp; after a series of acquisitions, EMC became a co-owner, but the protocol is available to all and is now used by every company involved in backup. We will meet it in IBM Tivoli, EMC Avamar, EMC NetWorker, Symantec NetBackup, Backup Exec, and many others, as well as in open source tools such as Bacula and Amanda. The configuration described below is based on Avamar Server 6.1 (virtual) and VNX for File 7.1. The configuration for Avamar Server 7 and VNX for File 8.1 is exactly the same; EMC Avamar 7 additionally supports EMC Isilon backup via NDMP. The topology of the entire solution looks like this:


Of course, the Avamar NDMP Accelerator is nothing but a Linux server running a RedHat 4 distribution (or compatible) with the AvamarClient-Linux and AvamarNDMP-Linux packages installed. These packages can be downloaded by going to https://avamar_server and looking for the NDMP entry in the "Documents & Downloads" section.


RedHat 4 can be installed as a minimal system; we only need SSH. If you plan to perform simultaneous copies, configure the server with 8 GB of RAM; otherwise 2 GB is enough. Install the packages with: rpm -ivh package.rpm. Before configuring NDMP, we need to log in to the EMC VNX Control Station as nasadmin, do "su root", and create the ndmp user; the command looks like this:

/nas/sbin/server_user server_2 -add -md5 -passwd ndmp


The NDMP service on EMC Celerra is enabled by default. To make simultaneous copies, we need to issue an extra command on the Control Station:

server_param server_2 -facility NDMP -modify snapTimeout -value 30
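Both Control Station commands together look like this (run as root after "su root"; server_2 is the Data Mover name from this example, and the long flag names follow the VNX for File command-line conventions):

```shell
# create the NDMP user on Data Mover server_2 (prompts for a password)
/nas/sbin/server_user server_2 -add -md5 -passwd ndmp

# raise the SnapSure timeout so that simultaneous NDMP copies can run
server_param server_2 -facility NDMP -modify snapTimeout -value 30
```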

Next, return to the RedHat console and issue the avsetupndmp command, entering the necessary parameters one by one:



The Avamar account name is simply the name under which the Avamar client is visible on the Avamar server. If everything goes correctly, we issue the following command:



Enter the Avamar server name and the domain where the client should be registered (the default is "clients"). If everything goes correctly, we can see the NDMP client and all active shares in the Avamar Server GUI.


And that is all; from here on we proceed with NDMP as with any other client. When performing a backup, a snapshot of the given file system is executed on the EMC Celerra (the SnapSure function). If the given file pool has no free space, we may meet the message "NDMP: SnapSure file system creation fails.".


We can verify this from the Control Station command line with fs_ckpt (creating a manual snapshot):


As you can see, we are out of space; now go to the Control Station GUI and increase the amount of space in the file pool.


Installing EMC ScaleIO 1.32 in vSphere 6

At the recent EMC World, EMC announced that it is moving towards Open Source. As a result of this new strategy, products such as ScaleIO and ViPR have been handed over to the community, and we can now freely install and test both solutions. What exactly is ScaleIO? It is a universal solution that works under Linux, Windows, and VMware and is designed for large environments. It offers tremendous scalability and performance; if you believe EMC, with the proper number of hosts it is the highest-performing software-defined storage in the world (a converged Server SAN). The installation and configuration of EMC ViPR has already been described on this blog. In this article I will deal with installing ScaleIO on our test cluster of three HP DL380 servers (local disks) running under vCenter 6.0. The entire procedure can also be carried out successfully using any other disks (e.g. iSCSI) or in a nested ESXi environment.


We start the installation by registering the ScaleIO plugin in the vSphere Web Client. One piece of information is missing from the ScaleIO documentation: before we can proceed with the install, we need to set the JAVA_HOME variable on the system (at this stage it is absolutely crucial). A 64-bit Java, version 6 or later, is required. Then run vSphere PowerCLI, go to the directory with the installation script, and run it. When you choose to install, a Tomcat server is started in the background, and the ScaleIO plugin is downloaded and installed from it. The whole procedure looks like this:
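A minimal sketch of that step, run from a vSphere PowerCLI session (the Java path and directory are examples, and the script name is a placeholder for the .ps1 setup script shipped in the ScaleIO 1.32 bundle):

```powershell
# JAVA_HOME must point at a 64-bit JRE before the script is started
$env:JAVA_HOME = 'C:\Program Files\Java\jre8'

# go to the unpacked ScaleIO plugin directory and run the setup script
cd C:\ScaleIO\VMware
.\ScaleIOPluginSetup.ps1    # placeholder name; choose the "register plugin" option
```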


After registering the plugin, you must log out of the vSphere Web Client and log in again (only this step triggers the installation of the plugin). Until we do this, we must not press ENTER in the script (in accordance with the clear information given on the screen). When we are sure the plugin is installed, we proceed to the next stage, which is loading the SVM template.


The ScaleIO VM load procedure looks like this:




The next step is to install the ScaleIO Data Client (SDC) driver on each ESXi host. It is supplied as a standard VIB and can be installed manually, by vSphere Update Manager, or directly from the ScaleIO plugin. After entering the plugin, choose the option "Install the SDC on the ESX" and select our hosts. Note: for the SDC to work properly, it requires a VMkernel interface on the network in which it syncs data with the other SDCs (at this stage it is not yet necessary to add the VMK).




After installing the SDC, each host must be restarted; therefore I suggest using Update Manager. At this point we are ready to install the ScaleIO environment, but before running the appropriate wizard, let's go to the advanced settings and enable these two options:


The preferred method of giving the ScaleIO VM (SVM) access to local drives on ESXi is RDM. If this is not possible (e.g. our RAID controller does not support RDM), a VMFS is created on the local disk and on it a VMDK (eager zeroed; its creation takes a long time, remember that!) which is connected to the SVM machine. If you do not select the above option, disks that do not support RDM will not be available in the ScaleIO installation wizard. Before the next step (it will appear in the next blog post due to the large number of screenshots), a brief digression on what is installed on each host.


The ScaleIO Virtual Machine (SVM) appliance is the carrier of the services. There will be at least one SVM on each host, but each will play a slightly different role. The primary role is the Meta Data Manager (MDM) component. It is a metadata management service and operates in a cluster of three nodes: Primary MDM, Secondary MDM, and Tie-Breaker. The Tie-Breaker is a witness intended to prevent a split-brain situation when there is a breakdown in our cluster. Another service is the ScaleIO Data Server (SDS), which manages capacity and access to the data consumed through the ScaleIO Data Client (SDC). The last service is the ScaleIO Gateway with the Installation Manager (IM), installed as a separate SVM machine. The IM service checks the entire environment before operations such as expansion or upgrade.


Installing and Configuring the EMC Isilon Simulator

New times and new requirements for the storage space used by companies have brought new solutions. One of them is EMC Isilon. Simplifying hugely, we can say that it is a simple NAS. But it is not just a disk array; the heart of it all is OneFS. The architecture is a clustered solution (the minimum starting number of units is three nodes); OneFS provides full automation of the configuration (cluster initialization takes a few minutes) and distribution of data across all nodes. Such a solution has several advantages, the main one being, of course, no single point of failure. A node fails? Connect a new one; the rest happens by itself. Another advantage is scalability, which is virtually unlimited: we connect another node to the cluster and thereby increase the available space (automatically, no configuration required). Add to this deduplication, compression, data protection, and several other services. One should also mention the huge performance of this solution: subsequent nodes not only increase the available space but also multiply performance (per the manufacturer: explosive growth in performance and capacity). And the last advantage: the API. OneFS provides a REST API through which all manipulations on files can be accelerated several times. EMC also provides a fully featured Isilon simulator! "Simulator" is the wrong word; it is a fully functional Isilon, only virtualized, and its performance is just a little smaller. I strongly encourage testing; the EMC Isilon simulator can be downloaded here (requires an EMC account; this is the version for VMware Workstation/Player) or directly from me (version 7.1.1, OVA file). I write here about the simulator, but the hardware configuration of a cluster looks almost the same!


The emulator has one limitation relative to the hardware version: you can install a one-node version (but we will install three). At the beginning, deploy the Isilon appliance (OVA) or import the machine into VMware Player (VMX file). Run it, wait, and start answering the questions:




We will not need SupportIQ; we do not have EMC support for the emulator:


Configure the internal network (int-a) over which the nodes communicate (no gateways); the addressing can be anything:


Configure the external network; these addresses will be available from outside together with the management service (all at the same time, as befits a cluster). Isilon does not distinguish between addresses on the first or second node; they are all equivalent:

Configure our DNS servers:


We choose how to connect the new nodes; in a hardware cluster this happens over the InfiniBand bus (automatically), here we add nodes manually:


Then we set the date and time zone; here we do not do anything, we will enter valid values later in the web interface. Now configure the SmartConnect service address; this is an important point, do not miss it (although it can also be defined later from the web interface). External EMC services, such as ViPR, will communicate with this address.


A summary of our configuration; type yes and wait for the cluster configuration to finish:


This message indicates that everything is OK. We should also be able to ping the cluster and service addresses. Go to the web-based console (in my case, one of the external addresses):



As the subsequent nodes pull their configuration from the first, before we add them let's finish the basic configuration. First of all, set the time correctly (and the NTP server):


If we have our own Active Directory, we can immediately add the cluster to AD. This allows us to export network shares in accordance with AD privileges:


The management interface itself is very simple (a OneFS advantage); it comes down to manipulating pairs of privileges and shares. We create an Access Zone with precisely defined privileges (local, AD, NIS, etc.) and combine it with shares (e.g. SMB or NFS). The file system space is a single whole; we have no influence on it. Now we can add another node; the procedure is similar: after deploying the appliance and running it, select 2:

The new node detects our cluster and attaches itself to it:


The whole procedure takes a few minutes. In the same way we add a third node, and as a result we have a properly configured cluster.


Finally, a few words about performance. In the case of a virtual deployment, it depends on where the nodes are sited (virtual or physical ESXi) and what drives are attached. At the moment we are preparing a physical ESXi test cluster with plenty of internal drives. Once everything is ready, I will try to perform the appropriate tests and post a few charts exercising a virtual Isilon built on decent hardware. An EMC Isilon hardware cluster has phenomenal performance; the following graphs come from a synthetic test and meter. Writing 8k files:


When reading, we reach 900 Mb/s. The test was performed on a virtual machine sitting on an NFS share, without any advanced philosophy or optimization! Note the minimal CPU load.


Deploying a single large file:


A chart (maybe a bit garbled) from many hours of testing reads and writes of 50,000 files of variable size. The data is given in MB/s (test made with our own software):


There are two conclusions. First, within EMC Isilon lies real power: almost 300 MB/s on a plain, clustered NAS. The second conclusion is that it is possible to clog EMC Isilon quite a bit (but the average is still very good). The graph was made on a demo cluster from EMC consisting of three nodes. Now think what happens at 9, 18, 36 nodes…


Installing and Configuring EMC ViPR for Dummies

We could hear about EMC ViPR and ViPR SRM at the recent EMC Forum 2014 conference in Warsaw. It is quite a new solution from EMC, showing exactly how the company sees software-defined storage. What exactly is ViPR? In huge simplification, it is a storage management (monitoring and reporting) system. At some stage it replaces the individual storage management consoles (e.g. Isilon, VNX, and others), providing a consistent interface between user requirements (for space) and the management and distribution of space for those users (e.g. individuals or business groups). What is ViPR not? It is not a storage virtualizer (please keep this in mind). The solution itself is very cheap; for $10k you get a full set of licenses for a huge number of terabytes. Fortunately, we do not need to buy EMC ViPR for testing purposes; it can be downloaded directly from EMC (version for three or five nodes) with an unlimited-time license (for 300 TB), or traditionally, from me (single-node version). As the ViPR appliance is quite demanding with respect to RAM (it installs with 16 GB but works fine on 8 GB), the one-node version is enough for tests. In this post I will show you how to take the first steps in ViPR so we do not get lost; this will help you get an idea of what lies behind this solution. In the tests we use earlier installations of EMC Isilon and EMC VNX for File as the backend.


The first stage is a standard appliance OVA upload, providing all the necessary parameters.


Log in to the web interface as root/ChangeMe; after logging in (and changing the password), you must upload the license file (ViPR_Controller_License.lic). After this step, we have to wait a while:


Then go to "Settings -> Configuration Properties -> Upgrade" and enter the address of our proxy (if we have one) and the EMC account with which we will authorize against EMC:


Now go to the "Settings -> Upgrade" section and install the latest version of ViPR:


After the upgrade, which goes quite quickly, we can proceed to set up the whole environment so as to be able to export disk resources. We start by going to "Security -> Authentication Providers", where we add a "user source"; in 99% of cases this will be an Active Directory domain (if you omit this step, we will have to do everything as root):


Now go to "Tenant Settings -> Tenants" and create our first tenant. In the same place, map the users who can access it; the attributes can be anything compatible with AD, for example the name of an AD group. An added user has basic rights: they can log in to ViPR and apply for resources (and nothing else).


Create and map the tenant roles. Mapping a role is nothing else than granting a relevant group (or user) from Active Directory high-level permissions in the tenant:


Next we create our first project, in which resources will be allocated; in this case we do it as root. Alternatively, you can log in as a tenant user and prepare a project within its own structure (objects added as root are available globally):


Tenants, projects, and the relevant permissions are done. Time to configure the "physical" resources and translate them into "virtual" ones. Go to "Physical Assets -> Storage Systems" and add our storage systems:



Each time storage is added, ViPR inspects the system, analyzes it, and automatically adds the available pools:


For file services, it is worth remembering that the "port" should match the FQDN of the storage serving the service. In the figure below, the port name equals the SmartZone name on the Isilon. If something is off at this point (e.g. the SmartZone name or the VNX interface does not correspond to the DNS names of the devices), it will not be possible to export volumes via NFS (it ends with the message "Operation failed, diagnostic report: Unable to resolve hostname 'xxx'." on the vCenter side):


The next step is to create our first virtual array and give our tenants access to it:


Next, go to "Physical Assets -> Networks" and add a new "network". The network is nothing else than a point joining the "Storage Ports" into groups that we can assign to a virtual array. In this way, we can quite freely mix access to data resources for specific tenants (e.g. business groups).


At this moment we are configuring File resources, so we go to "Virtual Assets -> File Virtual Pools" and add a new pool. This resource must be assigned to a specific virtual array:


The available hardware resources (our Isilon) are suggested automatically in accordance with the selected settings (if "Networks" is configured poorly, we will not get past this step). At this point, we have a fully configured and equipped virtual array:


It remains for us to add the vCenter to which we will export volumes via NFS. This step is not mandatory; we might as well define single hosts (Windows, Linux, whatever). ViPR will recognize the full configuration of vCenter, including information about hosts and clusters.


Now we have everything we need to prepare our first file system and export it to our vCenter. We can do this as root, but of course it would be better to do it as a tenant member. A new resource is requested in "Service Catalog -> View Catalog -> File Services for VMware vCenter". Depending on how the privileges are set up, we get the resource immediately or we have to wait for the approval of our request.



These two steps were completed as the root user, so the added vCenter is available globally to all tenants. But it is enough to log in again as the user who manages the tenant to see that they can add another vCenter themselves (a vCenter cannot be duplicated).


And that's basically all when it comes to the ViPR controller's logic of operation. I very much encourage you to test this solution; in the next post I will try to show the principle of EMC ViPR SRM's operation. And finally, the answer to the question "is it worth it?" Initially the idea itself seemed quite strange to me, but the longer I use it, the more I find it sensational. EMC ViPR does not replace the administrative panel of the storage; you cannot set anything complex in ViPR. However, it replaces and simplifies all operations on volumes (create, export). The more complex the storage environment, the more useful ViPR is!


Error: ViPR VM lost network.

Reason: You moved ViPR to another vApp. Do not do this; the VM settings are taken from the vApp settings.


Installing and Configuring the EMC VNX for File Simulator

Often we do not have access to specific vendor hardware. Storage arrays are very expensive toys and do not appear, just like that, in home labs. Fortunately, companies such as EMC provide simulators, for example the VNX simulator. Of course, this product is devoid of the block part (FC), but it has a fully functional file part (including the full management interface), completely free. What does that give us? The ability to check in practice what a professional disk array looks like and the possibility to test several solutions: replication between arrays (this requires two simulators), for example. And of course, to satisfy curiosity, as always.



In this post I will show how to install and configure the EMC VNX for File simulator (the next posts will be about installing EMC Isilon and ViPR). The simulator is available for free (with licenses to support NFS, CIFS, Snapshots, File Replicator, and Retention) on the EMC pages, or it can be downloaded directly from me. Importing the machine into vSphere (or VMware Workstation/Player) is a standard process:


The VNX version is 8.1, which is at the moment the newest:


We can also download version 7.1.6 (the version installed in VNX5300 arrays) with one controller (less demanding). The machine has two interfaces and needs two IP addresses. If you have your own DNS server, it is best to immediately create the appropriate A/PTR records (to make swapping SSL certificates easier); obviously this is not a step required for proper operation. These two addresses are for management; you will also need at least one IP address for running services.


When you start the machine, log in to it as root/nasadmin via console:


And configure the network by typing “netconfig -d eth0” (and eth1):


When configuring the interfaces is done, issue the command ifup eth0 (and eth1). In the file /etc/sysconfig/network we can disable IPv6:
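The network steps above, condensed into one sequence (run as root on the simulator's Control Station; netconfig asks for the addresses interactively, and the sysconfig variable is the one used by RHEL-era distributions, so treat it as an assumption for other builds):

```shell
# interactive address configuration for both interfaces
netconfig -d eth0
netconfig -d eth1

# bring both interfaces up
ifup eth0
ifup eth1

# optionally disable IPv6 system-wide
echo 'NETWORKING_IPV6=no' >> /etc/sysconfig/network
```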


At this stage we can log off (wait a few minutes) and go to the management interface. Connect via a web browser to the address of the first interface; if everything is OK, we see this:


We run Unisphere and review the configuration (the username and password are root/nasadmin). Unisphere needs Java, with the security settings at Medium (Control Panel -> Java). To use our simulator, we have to activate all (or selected) licenses; we do this in "Settings -> Manage Licenses":


The simulator in this form is ready and you can play with it. If we want to use it for more serious tests (remember, the performance will not be the greatest), we need three more steps. In the first step, we stop the array and add 2 GB of RAM to it (in order to have a minimum of 6 GB), without which the secondary controller (data_mover3) will be inoperative (message "slot_empty"). In the second step, we move one of the two controllers (Data Movers) to the second network interface configured by us earlier, via the configure_nic command (/opt/blackbird/tools):


And the last step: restart the storage with the "reboot" command. After the restart, we can verify the new configuration:



The storage is configured with two pools, tagged economy (about 20 GB) and performance (about 90 GB), which should be completely sufficient for testing. I personally installed two VNX simulators, which I need to test EMC ViPR, about which I will write in another post. Holders of their own CA who want to replace the SSL certificate presented by the Control Station (Unisphere) may look here. Preparing the appropriate datastore can be done in two ways. The first method, traditional, boring, and passé: we create it on the array side and connect it to ESXi traditionally. As this is the first of my posts about EMC products, I describe here the other way. In the spirit of a software-defined-storage world, we use EMC VSI for vSphere. It is a plugin for the vCenter Web Client, installed as a virtual appliance and providing integrated support for EMC products (available for free). Installation is very simple (deploy the OVA); after installing, configuring, and starting the appliance, log in to the administrative interface and integrate VSI with our vCenter (https://IP_appliance:8443/vsi_usm/admin, username: admin / ChangeMe):


In the next step, go to the vSphere Web Client and in "vCenter -> EMC VSI -> Solutions Integration Services" connect your VSI:


As you can see, in return it uses the same login used when connecting vCenter to VSI; this login needs Administrator privileges on the vCenter. The effect of the integration:


Now in the Storage Systems add your virtual VNX array:


The integration proceeds without problem:


And finally, we can create our new datastore on the virtual VNX array from the menu available at the level of a cluster or a single server:


Parameters of the sample datastore:


This method of creating datastores is much more efficient and faster than the traditional model. VNX for File works very well; I highly recommend testing it. The installation instructions can also be downloaded from here; they also describe how to extend the disk space. If there are any questions, I will be glad to answer them.

EDIT 2015.03.16 @Sly:

The boring and old-fashioned method looks like this:

1. Create a new network interface; the addresses given above are intended only for managing the VNX. This interface is mandatory for serving CIFS/NFS:


2. Create a new filesystem:


3. Create NFS export:


4. Mount new datastore:


5.  Enjoy 🙂