Channel: Symantec Connect - Storage Foundation - Discussions

Unable to add a new disk to disk groups copied by array-based copying

I need a solution

I have some disks copied via an array-based replication solution.

After the disks are fully synced, the master/slave relationship is split and the disks are imported on the destination host.

vxdisk list didn't show any udid_mismatch or clone_disk flags.

But when I added a new empty disk, I couldn't add it to the disk group. I forget the exact error, but it was related to clone_disk. I deported the disk group, updated the UDIDs and set clone=off on all the disks, imported the disk group again, and then I was able to add a new disk to the disk group. I am on at least 5.1SP1RP2.

Now, is this a bug which has already been fixed? Or do the udid_mismatch and clone_disk flags need to be cleared every time we use array-based copying?
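Roughly, the recovery sequence I used looked like the following (a sketch from memory; the disk group and disk access names are placeholders, not the exact names from my system):

# vxdg deport <dgname>
# vxdisk updateudid <da_name1> <da_name2> ...     (refresh the UDIDs stored in the private regions)
# vxdisk set <da_name1> clone=off                 (repeat for each disk to clear the clone_disk flag)
# vxdg import <dgname>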

 

Thanks in advance; sorry I am not able to show the exact errors.


VxVM VVR vradmin ERROR V-5-52-431

I need a solution

Environment

OS = rhel 6.2

SFHA/DR = 6.0

GCO Configured

Node 1  (Primary Site)

 

# vxprint
Disk group: DG

TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg DG           DG           -        -        -        -        -       -

dm DG01         sde          -        20802640 -        -        -       -
dm DG02         sdb          -        20802640 -        -        -       -
dm DG03         sdd          -        41767456 -        -        -       -

rv DG-RVG       -            ENABLED  -        -        CLEAN    -       -
rl rlk_192.168.253.32_DG-RVG DG-RVG ENABLED -  -        ACTIVE   -       -
v  DATA-VOL     DG-RVG       ENABLED  16777216 -        ACTIVE   -       -
pl DATA-VOL-01  DATA-VOL     ENABLED  16777216 -        ACTIVE   -       -
sd DG02-01      DATA-VOL-01  ENABLED  16777216 0        -        -       -
pl DATA-VOL-02  DATA-VOL     ENABLED  LOGONLY  -        ACTIVE   -       -
sd DG02-02      DATA-VOL-02  ENABLED  288      LOG      -        -       -
pl DATA-VOL-03  DATA-VOL     ENABLED  LOGONLY  -        ACTIVE   -       -
sd DG01-01      DATA-VOL-03  ENABLED  288      LOG      -        -       -
dc DATA-VOL_dco DATA-VOL     -        -        -        -        -       -
v  DATA-VOL_dcl gen          ENABLED  67840    -        ACTIVE   -       -
pl DATA-VOL_dcl-01 DATA-VOL_dcl ENABLED 67840  -        ACTIVE   -       -
sd DG02-03      DATA-VOL_dcl-01 ENABLED 67840  0        -        -       -
pl DATA-VOL_dcl-02 DATA-VOL_dcl ENABLED 67840  -        ACTIVE   -       -
sd DG01-03      DATA-VOL_dcl-02 ENABLED 67840  0        -        -       -
v  DATA-VOL-SRL DG-RVG       ENABLED  16777216 SRL      ACTIVE   -       -
pl DATA-VOL-SRL-01 DATA-VOL-SRL ENABLED 16777216 -      ACTIVE   -       -
sd DG01-02      DATA-VOL-SRL-01 ENABLED 16777216 0      -        -       -

co vvrcacheobj  -            ENABLED  -        -        ACTIVE   -       -
v  cachevol     vvrcacheobj  ENABLED  10485760 -        ACTIVE   -       -
pl cachevol-01  cachevol     ENABLED  10485760 -        ACTIVE   -       -
sd DG03-01      cachevol-01  ENABLED  10485760 0        -        -       -
 

 

Node2 (DR Site)

 

# vxprint
Disk group: DG

TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg DG           DG           -        -        -        -        -       -

dm DG01         sdb          -        20802640 -        -        -       -
dm DG02         sdc          -        20802640 -        -        -       -
dm DG03         sdd          -        20802640 -        -        -       -

rv DG-RVG       -            ENABLED  -        -        ACTIVE   -       -
rl rlk_192.168.253.31_DG-RVG DG-RVG ENABLED -  -        PAUSE    -       -
v  DATA-VOL     DG-RVG       ENABLED  16777216 -        ACTIVE   -       -
pl DATA-VOL-01  DATA-VOL     ENABLED  16777216 -        ACTIVE   -       -
sd DG01-01      DATA-VOL-01  ENABLED  16777216 0        -        -       -
pl DATA-VOL-02  DATA-VOL     ENABLED  LOGONLY  -        ACTIVE   -       -
sd DG01-02      DATA-VOL-02  ENABLED  288      LOG      -        -       -
pl DATA-VOL-03  DATA-VOL     ENABLED  LOGONLY  -        ACTIVE   -       -
sd DG02-01      DATA-VOL-03  ENABLED  288      LOG      -        -       -
dc DATA-VOL_dco DATA-VOL     -        -        -        -        -       -
v  DATA-VOL_dcl gen          ENABLED  67840    -        ACTIVE   -       -
pl DATA-VOL_dcl-01 DATA-VOL_dcl ENABLED 67840  -        ACTIVE   -       -
sd DG01-03      DATA-VOL_dcl-01 ENABLED 67840  0        -        -       -
pl DATA-VOL_dcl-02 DATA-VOL_dcl ENABLED 67840  -        ACTIVE   -       -
sd DG02-03      DATA-VOL_dcl-02 ENABLED 67840  0        -        -       -
v  DATA-VOL-SRL DG-RVG       ENABLED  16777216 SRL      ACTIVE   -       -
pl DATA-VOL-SRL-01 DATA-VOL-SRL ENABLED 16777216 -      ACTIVE   -       -
sd DG02-02      DATA-VOL-SRL-01 ENABLED 16777216 0      -        -       -

co vvrcacheobj  -            ENABLED  -        -        ACTIVE   -       -
v  cachevol     vvrcacheobj  ENABLED  10485760 -        ACTIVE   -       -
pl cachevol-01  cachevol     ENABLED  10485760 -        ACTIVE   -       -
sd DG03-01      cachevol-01  ENABLED  10485760 0        -        -       -

 

Problem

[root@node1 ~]# vradmin -g DG verifydata DG-RVG 192.168.253.32 cache=vvrcacheobj
Message from Primary:
VxVM VVR vradmin ERROR V-5-52-431 Secondary 192.168.253.32 not in RDS.
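To narrow this down it may help to compare how the RDS looks from each side; a rough sketch of the checks (using the disk group and RVG names from this configuration):

# vradmin -g DG printrvg DG-RVG     (run on both Primary and Secondary; shows the RDS membership)
# vradmin -g DG repstatus DG-RVG    (replication status; note the rlink on the DR node shows PAUSE above)
# vxprint -Pl -g DG                 (rlink details: remote_host, remote_dg, remote_rlink)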

 

 


VMwareDisks error

I need a solution

I installed SVS 6.0.1 on my server and updated VRTSvcsag to version 6.0.2. Then I configured a VMwareDisks resource in main.cf.

VMwareDisks VMwareDisks1 (
                ESXDetails = { "10.172.117.95" = "root=ISIuJWlWLwPOiWKuM" }
                DiskPaths = {
                         "[95_storage] rhel5104/rhel5104_1.vmdk" = "0:1",
                         "[95_storage] rhel5104/rhel5104_2.vmdk" = "0:2",
                         "[95_storage] rhel5104/rhel5104_3.vmdk" = "0:3" }
                )

 

But after had (the VCS engine) started, the VMwareDisks resource is not probed, and the following error is logged:

Dec 28 01:19:42 rhel5104 AgentFramework[16962]: VCS ERROR V-16-10061-22521 VMwareDisks:VMwareDisks1:monitor:Incorrect configuration: The disk '[95_storage] rhel5104/rhel5104_3.vmdk' has incorrect RDM configuration.

--------------------------------------------------------------------------------------------

The disk UUIDs are automatically added to the DiskPaths values in main.cf on hastart:

        VMwareDisks VMwareDisks1 (
                ESXDetails = { "10.172.117.95" = "root=ISIuJWlWLwPOiWKuM" }
                DiskPaths = {
                         "6000C291-229e-4704-719c-2a66b8f21ad8:[95_storage] rhel5104/rhel5104_1.vmdk" = "0:1",
                         "6000C29a-82d7-d365-a09e-68d37698afd9:[95_storage] rhel5104/rhel5104_2.vmdk" = "0:2",
                         "6000C29d-0bd2-901d-5b95-24934b43144e:[95_storage] rhel5104/rhel5104_3.vmdk" = "0:3" }
                )
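One thing that might be worth checking (an assumption on my part, not a confirmed cause): whether the third disk was created as a raw device mapping instead of a plain virtual disk. The descriptor file on the datastore shows this, for example:

# grep createType /vmfs/volumes/95_storage/rhel5104/rhel5104_3.vmdk
createType="vmfs"

A regular virtual disk reports createType="vmfs"; an RDM descriptor reports createType="vmfsRawDeviceMap" or "vmfsPassthroughRawDeviceMap".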

Mounting VxFS 5.0 file systems on AIX automatically

I need a solution

Hi,

I need to mount several VxFS file systems automatically in AIX 6.1 so that all VxFS file systems will be available after reboot.

 

Please provide me the complete procedure for mounting VxFS file systems in AIX so that all file systems mount automatically after reboot.
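For reference, my understanding so far (a sketch, not yet verified on my system) is that AIX mounts at boot whatever is listed in /etc/filesystems with mount = true, so a VxFS stanza would look roughly like this, with the device and mount point as placeholders:

/data:
        dev       = /dev/vx/dsk/datadg/datavol
        vfs       = vxfs
        mount     = true
        check     = false
        account   = false

Please confirm whether anything else is needed for VxFS specifically.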

 

Regards

Pradeep

 


Switch port Description scripts

I do not need a solution (just sharing information)

I would like to know how to assign a description to each particular port with the help of a script.

Right now I am doing this manually in Device Manager: select the particular port and add the description.

How do I add them through a script, so that if I run the script on the switch it sets the description on each port?

For example, server abc has 8 fibres:
abc (SP1)
abc (SA1)
abc (SP2)
abc (SA2)
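A rough sketch of what I have in mind, assuming a Cisco MDS switch (Device Manager being the MDS GUI); the interface numbers and descriptions are placeholders:

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport description abc_SP1
switch(config-if)# interface fc1/2
switch(config-if)# switchport description abc_SA1
switch(config-if)# end

The same lines could be generated for all ports and pushed to the switch in one go.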

Thanks,

 

 

Fencing with iSCSI disks

I need a solution

I have my test cluster configured with iSCSI disks. I tried to configure fencing with iSCSI as well, with no success.
The vxfentsthdw utility fails the disk always on the same test, for all three disks I tried.
Could it have something to do with the speed of my iSCSI server (FreeNAS running as a VM), or are there some configuration tricks to make iSCSI disks work as fencing devices? I did not have any problems with iSCSI LUNs for applications; they are slow but working.
Please find below vxfentsthdw output:

redhat23:root ~ # /opt/VRTSvcs/vxfen/bin/vxfentsthdw
...
Do you still want to continue : [y/n] (default: n) y
The logfile generated for vxfentsthdw is /var/VRTSvcs/log/vxfen/vxfentsthdw.log.17367

Enter the first node of the cluster:
redhat23
Enter the second node of the cluster:
redhat24

Enter the disk name to be checked for SCSI-3 PGR on node redhat23 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure its the same disk as seen by nodes redhat23 and redhat24
/dev/vx/rdmp/disk_0

Enter the disk name to be checked for SCSI-3 PGR on node redhat24 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure its the same disk as seen by nodes redhat23 and redhat24
/dev/vx/rdmp/disk_0

***************************************************************************

Testing redhat23 /dev/vx/rdmp/disk_0 redhat24 /dev/vx/rdmp/disk_0

Evaluate the disk before testing ........................ No Pre-existing keys
RegisterIgnoreKeys on disk /dev/vx/rdmp/disk_0 from node redhat23 ...... Passed
Verify registrations for disk /dev/vx/rdmp/disk_0 on node redhat23 ..... Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/disk_0 from node redhat24 ...... Passed
Verify registrations for disk /dev/vx/rdmp/disk_0 on node redhat24 ..... Passed
Unregister keys on disk /dev/vx/rdmp/disk_0 from node redhat23 ......... Passed
Verify registrations for disk /dev/vx/rdmp/disk_0 on node redhat24 ..... Failed

Unregistration test for disk failed on node redhat24.
Unregistration from one node is causing unregistration of keys from the other node.
Disk is not SCSI-3 compliant on node redhat24.
Execute the utility vxfentsthdw again and if failure persists contact
the vendor for support in enabling SCSI-3 persistent reservations

Removing test keys and temporary files, if any...
redhat23:root ~ #
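For what it's worth, a quick way to test whether the target honours SCSI-3 persistent reservations independently of the fencing stack would be to register and read keys by hand with sg_persist (a sketch; /dev/sdX stands for the OS path of the same LUN, and the key value is arbitrary):

# sg_persist --out --register --param-sark=0x1234abcd /dev/sdX     (register a key from redhat23)
# sg_persist --in --read-keys /dev/sdX                              (read the keys back from redhat24)

If a key registered on one node is not visible from the other, or disappears when the other node unregisters its own key, the target's persistent reservation support is the problem rather than VxFEN.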


Solaris 11.1 VxVM 6.0.1: 'df' causes a panic

I need a solution

Environment:

System Configuration: HP ProLiant BL480c G1

Oracle Solaris 11.1 X86

panic string:  

BAD TRAP: type=e (#pf Page fault) rp=fffffffc816fdb90 addr=0 occurred in module "unix" due to a NULL pointer dereference

Veritas INFO: 

PKGINST:  VRTSvxvm
      NAME:  Binaries for VERITAS Volume Manager by Symantec
  CATEGORY:  system
      ARCH:  i386
   VERSION:  6.0.100.000,REV=08.01.2012.08.52

Stack:

genunix: [ID 655072 kern.notice] fffffffc816fdab0 unix:die+105 ()
genunix: [ID 655072 kern.notice] fffffffc816fdb80 unix:trap+153e ()
genunix: [ID 655072 kern.notice] fffffffc816fdb90 unix:cmntrap+e6 ()
genunix: [ID 655072 kern.notice] fffffffc816fdca0 unix:strncpy+1c ()
genunix: [ID 655072 kern.notice] fffffffc816fdcd0 odm:odmstatvfs+90 ()
genunix: [ID 655072 kern.notice] fffffffc816fdcf0 genunix:fsop_statfs+1a ()
genunix: [ID 655072 kern.notice] fffffffc816fde70 genunix:cstatvfs64_32+42 ()
genunix: [ID 655072 kern.notice] fffffffc816fdec0 genunix:statvfs64_32+69 ()
genunix: [ID 655072 kern.notice] fffffffc816fdf10 unix:brand_sys_sysenter+1dc ()

Messages:

unix: [ID 839527 kern.notice] df:
unix: [ID 753105 kern.notice] #pf Page fault
unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x0
unix: [ID 243837 kern.notice] pid=3965, pc=0xfffffffffb893ff8, sp=0xfffffffc816fdc88, eflags=0x10206
unix: [ID 211416 kern.notice] cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
unix: [ID 624947 kern.notice] cr2: 0
unix: [ID 625075 kern.notice] cr3: 59f0a2000
unix: [ID 625715 kern.notice] cr8: c
unix: [ID 100000 kern.notice]
unix: [ID 592667 kern.notice]       rdi: fffffffc816fdd48        rsi:                0             rdx:                f
unix: [ID 592667 kern.notice]       rcx:                1          r8:              e80             r9:                0
unix: [ID 592667 kern.notice]       rax: fffffffc816fdd48       rbx:         fefa3430          rbp: fffffffc816fdca0
unix: [ID 592667 kern.notice]       r10: fffffffffb856d00         r11:                0            r12: fffffffc816fdd00
unix: [ID 592667 kern.notice]       r13: ffffc10012176880    r14:                0            r15: ffffc1002bb09480
unix: [ID 592667 kern.notice]       fsb:                0           gsb: ffffc1000eac8000     ds:               4b
unix: [ID 592667 kern.notice]        es:               4b          fs:                0               gs:              1c3
unix: [ID 592667 kern.notice]       trp:                e            err:                0             rip: fffffffffb893ff8
unix: [ID 592667 kern.notice]        cs:               30           rfl:            10206            rsp: fffffffc816fdc88
unix: [ID 266532 kern.notice]        ss:               38

In the preceding panic log I see "odm:odmstatvfs+90". I think this is the root of the panic, but due to my lack of scat and mdb knowledge I cannot investigate this module. When I remove VxVM, there is no panic when I issue 'df'.

If I can provide more information about this case, please let me know. For now I don't know what additional info to provide.

The core dump is about 400 MB, which is more than I can attach to this message.
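In case it helps, this is roughly how I opened the dump (a sketch; file names assume the usual savecore layout under /var/crash, and the compressed vmdump.0 may first need to be expanded):

# cd /var/crash
# savecore -f vmdump.0            (only if unix.0/vmcore.0 do not already exist)
# mdb unix.0 vmcore.0
> ::status
> ::panicinfo
> $c                              (stack of the panicking thread)
> ::quit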


Where will I get Storage Foundation Basic for Solaris SPARC?

I need a solution

Hello,

I tried to download Storage Foundation Basic for Solaris 10 SPARC from the Symantec site, but no luck. Whenever I click on Storage Foundation Basic for Solaris SPARC, it downloads the 360 MB x86 version. Could you please provide me the link to download it?

Thanks in advance.

 

Deepu.


vxstat: time since last reset?

I need a solution

Hello,

 

I wonder if there is a way to determine the time since the last reset ('vxstat -r') of vxstat counters for some or all VxVM objects.

Or asked differently: When vxstat gives me data like

[root:/]# vxstat
                      OPERATIONS          BLOCKS           AVG TIME(ms)
TYP NAME              READ     WRITE      READ     WRITE   READ  WRITE
vol export         3423282  38233203 159756362 400100767    5.6   25.4
vol rootvol        8105850  30419736  73677793  65981077    2.8   36.8
vol swapvol        2636206    340972  42179296  55502576    7.4 5597.9
vol var            5190036  12921735 116372454 126819449    5.9   14.1
how can I be sure these counts have accumulated since the server was rebooted? Without knowing if/when the counters were last reset, the output of vxstat is of limited use. Instead, I have to reset the counters myself, thereby losing potentially valuable data that has accumulated over a long time.
 
So is there a way to determine the time of the last reset?
 
This example is from a very old VxVM 4.1 on Solaris 10, but the vxstat man page from 5.1 has no info about the time of the last reset either.
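The only workaround I can think of (a sketch, not a built-in feature) is to record the timestamp myself whenever I reset the counters, for example:

# vxstat -g <dg> -r && date '+%Y-%m-%d %H:%M:%S' > /var/tmp/vxstat_last_reset.<dg>
# cat /var/tmp/vxstat_last_reset.<dg>     (later: shows when the counters were last reset)

But that obviously does not help with counters that were reset before such a wrapper was put in place.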
 
 
KR
Jochen

Suggestion for updating SFHA 5.0 MP3RP3 environment

I need a solution

Environment

RHEL = 5.3

SFHA/DR = 5.0 MP3RP3

Primary Site = Two Nodes Cluster

DR Site = One Node Cluster

Query

We are planning to update our existing 5.0 version to the last available update of 5.0. (Later we plan to go to the latest version, which may be 6.0, but that would require updating our OS as well. So as a short-term plan we need to update our SFHA 5.0 MP3RP3 to the last available patch for SFHA 5.0.)

- Our understanding is that we can apply the rolling patch sfha 5.0 MP4RP1 directly on top of SFHA 5.0 MP3RP3.

Any quick update will be highly appreciated.

vxfs_34.i64243.tar missing from veritas.com website

I need a solution

Hello, I would like to compile VxFS support into lsof. The lsof FAQ says to go to the website below. VERITAS is now owned by Symantec, so I would like to ask where we can now find this file.

 

	    ftp://ftp.veritas.com/pub/support/vxfs_34.i64243.tar

Thanks

VxVM vxvol ERROR V-5-1-10128 Configuration daemon error 441

I need a solution

Environment

RHEL 5.3

SFHA 5.0 MP3RP3

Problem

My replication is in passthru mode, so I just need to dissociate and re-associate the SRL volume via the command below:

#vxvol -g DG dis SRL-VOLUME

but I am facing the problem below:

 

vxvol -g PHOENIX dis PHOENIX-U-SRL
VxVM vxvol ERROR V-5-1-10128  Configuration daemon error 441
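Before retrying, it may be worth checking whether vxconfigd itself is healthy; a rough sketch of the standard checks (nothing specific to this error code):

# vxdctl mode                  (should report: mode: enabled)
# ps -ef | grep vxconfigd      (is the daemon running?)
# vxdctl enable                (ask vxconfigd to rescan and re-enable itself)
# vxconfigd -k -x syslog       (last resort: restart vxconfigd; -k replaces the running daemon)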
 


What is the difference between the sfha, vm & fs rolling patches for SFHA?

I need a solution

I have an environment with SFHA 5.0MP3, and I am planning to upgrade to 5.0MP3RP3 on Red Hat 5.3 64-bit.

While searching for the RP3 patch, I found three rolling patches, i.e.

sfha-rhel5_x86_64-5.0MP3RP3

vm-rhel5_x86_64-5.0MP3RP3

fs-rhel5_x86_64-5.0MP3RP3

Please explain the difference between them; further, will I have to apply all three of them?

P.S. - I know NEW versions are already available, but I have a specific requirement for this version and therefore need to perform this urgently.

Please guide me as soon as possible, experts out there.

Thanks

difference

I need a solution

What is the difference between block-level and file-level storage, in simple terms?

I have searched Google as well.

Patch missing for some components

I need a solution

My query:

I was updating SFHA 5.0 MP3RP3 with the MP4RP1 rolling patch.

The patch I downloaded did not contain any installmp or installrp script. There were separate directories in which the RPMs were available. I had to run rpm -Uvh *.rpm to install them, which left me confused: should I install all the RPMs in all the directories, or only those corresponding to the products I am using?

My setup is Global Cluster and Replication - 2-node primary & 1-node DR.

Anyhow, what I did was install only the RPMs in the Storage Foundation and Veritas Cluster Server directories, though when I tried to install the RPMs from the Cluster Server directory after installing those from the Storage Foundation directory, I got a message that the RPMs were already installed.

Please help me with the above queries, as I am confusing myself.

Further, I have a similar replica environment with another client who is already on SFHA 5.0 MP4RP1. I compared the output of rpm -qa | grep VRTS to see if I had missed any RPMs which should be installed, and found 4 RPMs which are of an older version in my new environment. I searched for them with the find command in the entire patch directory tree but was unable to find them. The RPMs are:

VRTSmapro-common-5.0.3.0-RHEL4
VRTSvcsmg-5.0.40.00-MP4_GENERIC
VRTSvcsmn-5.0.40.00-MP4_GENERIC
VRTSvcsvr-5.0.40.00-MP4_GENERIC

I have similar packages in my environment but they are of different versions. Kindly inform me where I will be able to find these RPMs, and why they were not available in the 5.0 MP4RP1 rolling patch which I downloaded from SORT.
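For the comparison, this is roughly what I did (a sketch; the file names are just placeholders):

# rpm -qa | grep ^VRTS | sort > /tmp/vrts_new_env.txt      (on my newly patched system)
# rpm -qa | grep ^VRTS | sort > /tmp/vrts_ref_env.txt      (on the reference MP4RP1 system)
# diff /tmp/vrts_new_env.txt /tmp/vrts_ref_env.txt         (shows version mismatches and missing packages)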

Thanks


Solaris 11.1: uninstall VxVM 6.0.1 from a live CD boot

I need a solution

I have a Solaris 11.1 x86 box with VxVM 6.0.1.

My last action was trying to install the VxVM 6.0.3 patch on top of VxVM 6.0.1.

During the preparation stage the installer could not stop all modules:

    vxio failed to stop on smc
    vxdmp failed to stop on smc

The installer then suggested rebooting the system. After rebooting I got a panic cycle, and single-user mode does not help.

Now I am exploring the possibility of removing VxVM from a live CD boot.

The steps I have already taken:

1) Boot from sol-11_1-live-x86.iso

2) As root, zpool import -f rpool

3) ....

At step 3 I don't know what to do next. I tried to chroot into the BE, which I mounted with "beadm mount solaris /a", but I can't remove VxVM with standard methods.
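What I am considering for step 3, based on my own reading (an unverified sketch; I would appreciate confirmation): either stop the VxVM kernel modules from loading so the BE can at least boot, or remove the packages against the mounted BE with an alternate-root pkg operation.

3a) # vi /a/etc/system            (comment out any forceload lines for vx modules such as drv/vxio, drv/vxdmp, drv/vxspec, if present)
3b) # pkg -R /a uninstall VRTSvxvm
    # beadm unmount solaris
    # init 6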

 

Solaris 10u10 Zones and Storage Foundation Version 6.0

I need a solution

I have a Solaris 10u10 system with Veritas Storage Foundation version 6.0.100 and I'm trying to create a non-global zone.

The non-global zone doesn't completely come up because the VRTSvlic package creates a dependency on the svc:/milestone/multi-user milestone, but the vxfsldlic service is set to disabled.

Does anyone know how to get around this problem?
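A sketch of the workaround I am considering (assuming the service just needs to be enabled inside the zone; the service name is taken from the dependency above, and the zone name is a placeholder):

# zoneadm -z myzone boot                    (boot the zone as far as it gets)
# zlogin myzone svcs -x vxfsldlic           (confirm this is the blocking dependency)
# zlogin myzone svcadm enable vxfsldlic     (enable it so the multi-user milestone can come up)

Is that safe, or is there a supported fix?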

upgrading from SF 5.1 to 6.0.2

I need a solution

I am using the installer script to upgrade from SF 5.1 to SFHA 6.1. There is no VCS on this server, and the installer exits:

Logs are being written to /var/tmp/installer-201302061759ZoK while installer is in progress

    Verifying systems: 25%                         _____________________________________________________________________

    Estimated time remaining: (mm:ss) 0:10                                                                        2 of 8

    Checking system communication ................................................................................. Done
    Checking release compatibility ................................................................................ Done
    Checking installed product CPI ERROR V-9-40-1083 Cannot upgrade  product because it is not installed on your system.
 

I can no longer see an option to install or upgrade just the SF components without HA, just a choice of SFHA or VCS, so I am not sure of the best approach.

One disk shows FAILING in vxprint, but I can run prtvtoc against this disk

I need a solution

bash-2.03# prtvtoc /dev/rdsk/c4t0d58s2
* /dev/rdsk/c4t0d58s2 partition map
*
* Dimensions:
*     512 bytes/sector
*      64 sectors/track
*      60 tracks/cylinder
*    3840 sectors/cylinder
*   36828 cylinders
*   36826 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           0      3840      3839
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0 141411840 141411839
       3     15    01       3840      7680     11519
       4     14    01      11520 141400320 141411839
bash-2.03#

bash-2.03# vxprint -htg ictdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

dg ictdg        default      default  6000     1074301281.1224.svan1008

dm ictdg01      c4t0d47s2    sliced   3583     17670720 -
dm ictdg02      c4t0d25s2    sliced   3583     17670720 -
dm ictdg03      c4t0d48s2    sliced   5503     70698240 -
dm ictdg04      c4t0d49s2    sliced   5503     70698240 -
dm ictdg05      c4t0d50s2    sliced   5503     70698240 -
dm ictdg06      c4t0d51s2    sliced   5503     70698240 -
dm ictdg07      c4t0d52s2    sliced   5503     70698240 -
dm ictdg08      c4t0d53s2    sliced   5503     70698240 -
dm ictdg09      c4t0d54s2    sliced   5503     70698240 -
dm ictdg10      c4t0d55s2    sliced   7423     141400320 -
dm ictdg11      c4t0d56s2    sliced   7423     141400320 -
dm ictdg12      c4t0d57s2    sliced   7423     141400320 -
dm ictdg13      c4t0d58s2    sliced   7423     141400320 FAILING
dm ictdg14      c4t0d59s2    sliced   7423     141400320 -
dm ictdg15      c4t0d60s2    sliced   7423     141400320 -
dm ictdg16      c4t0d61s2    sliced   7423     141400320 -
dm ictdg17      c4t0d62s2    sliced   7423     141400320 -
dm ictdg18      c4t0d63s2    sliced   7423     141400320 -
dm ictdg19      c4t0d64s2    sliced   7423     141400320 -
dm ictdg20      c4t0d65s2    sliced   7423     141400320 -
dm ictdg21      c4t0d66s2    sliced   7423     141400320 -
dm ictdg24      c4t0d69s2    sliced   5503     70698240 -
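If the underlying LUN checks out (as the prtvtoc output suggests), my understanding is that the FAILING flag can be cleared manually once the disk has been verified; a rough sketch using the names from the output above:

# dd if=/dev/vx/rdmp/c4t0d58s2 of=/dev/null bs=1024k count=100    (quick read test of the DMP device)
# vxedit -g ictdg set failing=off ictdg13                          (clear the FAILING flag on the disk media record)
# vxprint -htg ictdg | grep ictdg13                                (the STATE column should return to "-")

Checking /var/adm/messages for the I/O errors that set the flag in the first place would also be worthwhile.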

Solaris 11.1 SFHA 6.0.3 - svc:/system/VRTSperl-runonce:default in maintenance

I need a solution

On a fresh install of SFHA 6.0.3 on Solaris 11.1 SPARC (per the instructions, i.e. install 6.0.1 without configuration, install 6.0.3, then configure), the following service is showing in the maintenance state:

# svcs -xv
svc:/system/VRTSperl-runonce:default (?)
 State: maintenance since Mon Feb 11 14:11:28 2013
Reason: Start method failed repeatedly, last exited with status 127.
   See: http://support.oracle.com/msg/SMF-8000-KS
   See: /var/svc/log/system-VRTSperl-runonce:default.log
Impact: This service is not running.

Looking at the log, the start method fails as the file it's trying to run ( /opt/VRTSperl/bin/runonce ) is missing / does not exist:

# tail /var/svc/log/system-VRTSperl-runonce:default.log
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]

Has anyone else seen this in SF 6.0.3?

The file name implies it's something that only needs to be run once (and was possibly deleted after it was run) - should this service be disabled/removed as part of the installation so it doesn't come up in maintenance every time?
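If the script really is a one-time post-install step, the sketch below is what I have in mind to stop the maintenance state from recurring (standard SMF commands; whether disabling it is actually safe is exactly what I am asking):

# svcadm clear svc:/system/VRTSperl-runonce:default      (clear the maintenance state)
# svcadm disable svc:/system/VRTSperl-runonce:default    (stop it from retrying at every boot)
# svcs -xv                                               (verify nothing else is left in maintenance)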

Please advise if further details are required.

thanks,
Grace
