
openEuler / kernel


Storage virtualization: sync the kworker NUMA affinity feature from openEuler-1.0-LTS to OLK-5.10

Done
Requirement
Opened this issue  
2021-11-19 16:21

Virtualization: sync the kworker NUMA affinity feature from openEuler-1.0-LTS to OLK-5.10

Comments (7)

imxcc created this task

Hi imxcc, welcome to the openEuler Community.
I'm the Bot here serving you. You can find the instructions on how to interact with me at
https://gitee.com/openeuler/community/blob/master/en/sig-infrastructure/command.md.
If you have any questions, please contact the SIG: Kernel, and any of the maintainers: @XieXiuQi , @YangYingliang , @成坚 (CHENG Jian) .

openeuler-ci-bot added the sig/Kernel label
imxcc set assignee to KevinZhu
imxcc assigned collaborator zhengzengkai
imxcc changed title

linux-gvqQox:~/avocado/tests/testcase/hst/hst_maintenance/kworker/hst_kworker_fun_004 # avocado --show test run test.py
Error running method "configure" of plugin "sysinfo": Key sysinfodir already registered in section sysinfo.collect
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/plugins/run.py:287: FutureWarning: The following arguments will be changed to boolean soon: sysinfo, output-check, failfast, keep-tmp and ignore-missing-references.
FutureWarning)
[SYSLOG][INFO] Sysinfo configured by file: /etc/avocado/sysinfo/sysinfo.json
Not logging /var/log/evs/evs.log (file does not exist)
Not logging /etc/evs/evs.ini (file does not exist)
Not logging /var/log/netflow.log (file does not exist)
Not logging /var/log/audit/audit.log (file does not exist)
Not logging /tmp/kbox_log.txt (file does not exist)
Not logging /Images/hotreplace/hotreplace.log (file does not exist)
Command line: /usr/local/bin/avocado --show test run test.py

Avocado version: 77.0

Config files read (in order):
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/avocado.conf
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/conf.d/result_upload.conf
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/conf.d/gdb.conf
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/conf.d/jobscripts.conf
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/conf.d/resultsdb.conf
/usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/conf.d/glib.conf
/etc/avocado/avocado.conf
/etc/avocado/conf.d/uvp_virt.conf
/etc/avocado/conf.d/valgrind.conf
/etc/avocado/conf.d/dhcp.conf
/root/.config/avocado/avocado.conf

Avocado config:
Section.Key Value
datadir.paths.base_dir /root/avocado/tests
datadir.paths.test_dir /root/avocado/tests/testcase
datadir.paths.data_dir /home/uts/data
datadir.paths.logs_dir ~/avocado/job-results
sysinfo.collect.enabled True
sysinfo.collect.commands_timeout -1
sysinfo.collect.installed_packages False
sysinfo.collect.profiler True
sysinfo.collect.locale C
sysinfo.collect.per_test False
sysinfo.collect.virt_conf /etc/avocado/conf.d/uvp_virt.conf
sysinfo.collectibles.commands etc/avocado/sysinfo/commands
sysinfo.collectibles.files etc/avocado/sysinfo/files
sysinfo.collectibles.profilers etc/avocado/sysinfo/profilers
sysinfo.collectibles.sysinfo_json /etc/avocado/sysinfo/sysinfo.json
runner.output.colored False
runner.output.color auto
runner.output.utf8
runner.timeout.after_interrupted 60
runner.timeout.process_died 10
runner.timeout.process_alive 60
remoter.behavior.reject_unknown_hosts False
remoter.behavior.disable_known_hosts False
job.output.loglevel debug
restclient.connection.hostname localhost
restclient.connection.port 9405
restclient.connection.username
restclient.connection.password
plugins.disable []
plugins.skip_broken_plugin_notification []
plugins.loaders ['file', '@DEFAULT']
plugins.resolver.order ['avocado-instrumented', 'python-unittest', 'glib', 'robot', 'exec-test']
gdb.paths.gdb /usr/bin/gdb
gdb.paths.gdbserver /usr/bin/gdbserver
plugins.jobscripts.pre /etc/avocado/scripts/job/pre.d/
plugins.jobscripts.post /etc/avocado/scripts/job/post.d/
plugins.jobscripts.warn_non_existing_dir False
plugins.jobscripts.warn_non_zero_status True
plugins.glib.unsafe False
kafka.kafka_server_list ["9.11.1.148:9092"]
kafka.json_result_topic uts-result-json
kafka.uts_json_version 0.0.1
host1.id 1
host1.ip 9.13.7.183
host1.user root
host1.password Nbhqv46#$
host1.bmc_ip 9.13.8.183
host1.bmc_user Administrator
host1.bmc_password Admin@9000
host2.id 2
host2.ip 169.169.169.2
host2.user root
host2.password Nbhqv46#$
host2.bmc_ip 169.169.169.102
host2.bmc_user Administrator
host2.bmc_password Admin@9000
host3.id 3
host3.ip 169.169.169.3
host3.user root
host3.password Nbhqv46#$
host3.bmc_ip 169.169.169.103
host3.bmc_user Administrator
host3.bmc_password Admin@9000
host4.id 4
host4.ip 169.169.169.4
host4.user root
host4.password Nbhqv46#$
host4.bmc_ip 169.169.169.104
host4.bmc_user Administrator
host4.bmc_password Admin@9000
host5.id 5
host5.ip 169.169.169.5
host5.user root
host5.password Nbhqv46#$
host5.bmc_ip 169.169.169.105
host5.bmc_user Administrator
host5.bmc_password Admin@9000
host6.id 6
host6.ip 169.169.169.6
host6.user root
host6.password Nbhqv46#$
host6.bmc_ip 169.169.169.106
host6.bmc_user Administrator
host6.bmc_password Admin@9000
libvirt.module auto
libvirt.uri auto
libvirt.tls disable
image_server.inuse yes
image_server.start_connect http://IMAGE-SERVER:5000/start_connect
image_server.download_timeout 3600
image_server.max_vm_count 10
image_server.image_tmp_server_in_suzhou 9.163.1.151
volume.fs_dir /Images/TestImg
volume.lvm_dir vhost-img1
volume.mount_point1 /mnt/ocfs2_1
volume.mount_point2 /mnt/ocfs2_2
volume.mount_point3 /mnt/ocfs2_3
volume.mount_point4 /mnt/ocfs2_4
volume.lun0 /dev/mapper/3654511b1002fa2ec8464a5210000011b
volume.lun1 /dev/mapper/3654511b1002fa2ec8464e55c0000014b
volume.lun2 /dev/mapper/3654511b1002fa2ec8464b2680000013a
volume.lun3 /dev/mapper/3654511b1002fa2ec8464ade400000131
volume.lun4 /dev/mapper/3654511b1002fa2ec8464abeb0000012c
volume.lun5 /dev/mapper/3654511b1002fa2ec8464abb40000012b
volume.lun6 /dev/mapper/3654511b1002fa2ec8464e46700000148
volume.lun7 /dev/mapper/3654511b1002fa2ec8464e75400000156
volume.lun8 /dev/mapper/3654511b1002fa2ec8464e6a000000155
volume.lun9 /dev/mapper/3654511b1002fa2ec16598ed9000001ab
volume.lun10 /dev/mapper/3654511b1002fa2ec8464a97b00000125
volume.lun11 /dev/mapper/3654511b1002fa2ec8464b2b60000013b
volume.lun12 /dev/mapper/3654511b1002fa2ec8464e5fb00000153
volume.lun13 /dev/mapper/3654511b1002fa2ec8464b11800000137
volume.lun14 /dev/mapper/3654511b1002fa2ec8464e27000000143
volume.lun15 /dev/mapper/3654511b1002fa2ec8464e1280000013f
volume.lun16 /dev/mapper/3654511b1002fa2ec8464aa7600000128
volume.lun17 /dev/mapper/3654511b1002fa2ec8464a77d00000120
volume.lun18 /dev/mapper/3654511b1002fa2ec8464e2c000000144
volume.lun19 /dev/mapper/3654511b1002fa2ec8464e0300000013d
volume.lun20 /dev/mapper/3654511b1002fa2ec8464b1cb00000138
volume.lun21 /dev/mapper/3654511b1002fa2ec8464a35d00000117
volume.lun22 /dev/mapper/3654511b1002fa2ec8464e50e0000014a
volume.lun23 /dev/mapper/3654511b1002fa2ec8464e3b400000147
volume.lun24 /dev/mapper/3654511b1002fa2ec8464a30800000116
volume.lun25 /dev/mapper/3654511b1002fa2ec8464a5be0000011d
volume.lun26 /dev/mapper/3654511b1002fa2ec0bf5aaf2000000d0
volume.lun27 /dev/mapper/3654511b1002fa2ec8464af5500000134
volume.lun28 /dev/mapper/3654511b1002fa2ec8464ac3a0000012d
volume.lun29 /dev/mapper/3654511b1002fa2ec8464ac910000012e
volume.lun30 /dev/mapper/3654511b1002fa2ec8464a3ad00000118
volume.lun31 /dev/mapper/3654511b1002fa2ec8464af0600000133
volume.lun32 /dev/mapper/3654511b1002fa2ec8464e0740000013e
volume.lun33 /dev/mapper/3654511b1002fa2ec8464a46200000119
volume.lun34 /dev/mapper/3654511b1002fa2ec8464a4b90000011a
volume.lun35 /dev/mapper/3654511b1002fa2ec8464ace10000012f
volume.lun36 /dev/mapper/3654511b1002fa2ec0a90dc3e000001a9
volume.lun37 /dev/mapper/3654511b1002fa2ec8464e18200000140
volume.lun38 /dev/mapper/3654511b1002fa2ec8464b00b00000135
volume.lun39 /dev/mapper/3654511b1002fa2ec8464ab020000012a
volume.lun40 /dev/mapper/3654511b1002fa2ec8464a6780000011e
volume.lun41 /dev/mapper/3654511b1002fa2ec8464a8d700000123
volume.lun42 /dev/mapper/3654511b1002fa2ec8464a18600000115
volume.lun43 /dev/mapper/3654511b1002fa2ec8464e64a00000154
volume.lun44 /dev/mapper/3654511b1002fa2ec8464b05900000136
volume.lun45 /dev/mapper/3654511b1002fa2ec8464e21f00000142
volume.lun46 /dev/mapper/3654511b1002fa2ec8464aacd00000129
volume.lun47 /dev/mapper/3654511b1002fa2ec8464a9d400000126
volume.lun48 /dev/mapper/3654511b1002fa2ec8464a88500000122
volume.lun49 /dev/mapper/3654511b1002fa2ec8464a56f0000011c
volume.lun50 /dev/mapper/3654511b1002fa2ec8464ad3000000130
volume.lun51 /dev/mapper/3654511b1002fa2ec8464a92500000124
volume.lun52 /dev/mapper/3654511b1002fa2ec8464e31600000145
volume.lun53 /dev/mapper/3654511b1002fa2ec8464a72c0000011f
volume.lun54 /dev/mapper/3654511b1002fa2ec8464a7ce00000121
volume.lun55 /dev/mapper/3654511b1002fa2ec8464e5ab00000152
volume.lun56 /dev/mapper/3654511b1002fa2ec8464e4b400000149
volume.lun57 /dev/mapper/3654511b1002fa2ec8464b3700000013c
volume.lun58 /dev/mapper/3654511b1002fa2ec8464aa2600000127
volume.lun59 /dev/mapper/3654511b1002fa2ec8464ae3400000132
volume.lun60 /dev/mapper/3654511b1002fa2ec8464b21a00000139
volume.lun61 /dev/mapper/3654511b1002fa2ec8464e36300000146
volume.lun62 /dev/mapper/3654511b1002fa2ec8464e1d000000141
disk.disk85 scsi:/Images/TestImg/kvm-disk-scsi_085
disk.disk84 scsi:/Images/TestImg/kvm-disk-scsi_084
disk.disk83 scsi:/Images/TestImg/kvm-disk-scsi_083
disk.disk82 scsi:/Images/TestImg/kvm-disk-scsi_082
disk.disk81 scsi:/Images/TestImg/kvm-disk-scsi_081
disk.disk80 scsi:/Images/TestImg/kvm-disk-scsi_080
disk.disk79 scsi:/Images/TestImg/kvm-disk-scsi_079
disk.disk78 scsi:/Images/TestImg/kvm-disk-scsi_078
disk.disk77 scsi:/Images/TestImg/kvm-disk-scsi_077
disk.disk76 scsi:/Images/TestImg/kvm-disk-scsi_076
disk.disk75 scsi:/Images/TestImg/kvm-disk-scsi_075
disk.disk74 scsi:/Images/TestImg/kvm-disk-scsi_074
disk.disk73 scsi:/Images/TestImg/kvm-disk-scsi_073
disk.disk72 scsi:/Images/TestImg/kvm-disk-scsi_072
disk.disk71 scsi:/Images/TestImg/kvm-disk-scsi_071
disk.disk70 scsi:/Images/TestImg/kvm-disk-scsi_070
disk.disk69 scsi:/Images/TestImg/kvm-disk-scsi_069
disk.disk68 scsi:/Images/TestImg/kvm-disk-scsi_068
disk.disk67 scsi:/Images/TestImg/kvm-disk-scsi_067
disk.disk66 scsi:/Images/TestImg/kvm-disk-scsi_066
disk.disk65 scsi:/Images/TestImg/kvm-disk-scsi_065
disk.disk64 scsi:/Images/TestImg/kvm-disk-scsi_064
disk.disk63 scsi:/Images/TestImg/kvm-disk-scsi_063
disk.disk62 scsi:/Images/TestImg/kvm-disk-scsi_062
disk.disk61 scsi:/Images/TestImg/kvm-disk-scsi_061
disk.disk60 scsi:/Images/TestImg/kvm-disk-scsi_060
disk.disk59 scsi:/Images/TestImg/kvm-disk-scsi_059
disk.disk58 scsi:/Images/TestImg/kvm-disk-scsi_058
disk.disk57 scsi:/Images/TestImg/kvm-disk-scsi_057
disk.disk56 scsi:/Images/TestImg/kvm-disk-scsi_056
disk.disk55 scsi:/Images/TestImg/kvm-disk-scsi_055
disk.disk54 scsi:/Images/TestImg/kvm-disk-scsi_054
disk.disk53 scsi:/Images/TestImg/kvm-disk-scsi_053
disk.disk52 scsi:/Images/TestImg/kvm-disk-scsi_052
disk.disk51 scsi:/Images/TestImg/kvm-disk-scsi_051
disk.disk50 scsi:/Images/TestImg/kvm-disk-scsi_050
disk.disk49 scsi:/Images/TestImg/kvm-disk-scsi_049
disk.disk48 scsi:/Images/TestImg/kvm-disk-scsi_048
disk.disk47 scsi:/Images/TestImg/kvm-disk-scsi_047
disk.disk46 scsi:/Images/TestImg/kvm-disk-scsi_046
disk.disk45 scsi:/Images/TestImg/kvm-disk-scsi_045
disk.disk44 scsi:/Images/TestImg/kvm-disk-scsi_044
disk.disk43 scsi:/Images/TestImg/kvm-disk-scsi_043
disk.disk42 scsi:/Images/TestImg/kvm-disk-scsi_042
disk.disk41 scsi:/Images/TestImg/kvm-disk-scsi_041
disk.disk40 scsi:/Images/TestImg/kvm-disk-scsi_040
disk.disk39 scsi:/Images/TestImg/kvm-disk-scsi_039
disk.disk38 scsi:/Images/TestImg/kvm-disk-scsi_038
disk.disk37 scsi:/Images/TestImg/kvm-disk-scsi_037
disk.disk36 scsi:/Images/TestImg/kvm-disk-scsi_036
disk.disk35 scsi:/Images/TestImg/kvm-disk-scsi_035
disk.disk34 scsi:/Images/TestImg/kvm-disk-scsi_034
disk.disk33 scsi:/Images/TestImg/kvm-disk-scsi_033
disk.disk32 scsi:/Images/TestImg/kvm-disk-scsi_032
disk.disk31 scsi:/Images/TestImg/kvm-disk-scsi_031
disk.disk30 scsi:/Images/TestImg/kvm-disk-scsi_030
disk.disk29 scsi:/Images/TestImg/kvm-disk-scsi_029
disk.disk28 scsi:/Images/TestImg/kvm-disk-scsi_028
disk.disk27 scsi:/Images/TestImg/kvm-disk-scsi_027
disk.disk26 scsi:/Images/TestImg/kvm-disk-scsi_026
disk.disk25 scsi:/Images/TestImg/kvm-disk-scsi_025
disk.disk24 scsi:/Images/TestImg/kvm-disk-scsi_024
disk.disk23 scsi:/Images/TestImg/kvm-disk-scsi_023
disk.disk22 scsi:/Images/TestImg/kvm-disk-scsi_022
disk.disk21 scsi:/Images/TestImg/kvm-disk-scsi_021
disk.disk20 scsi:/Images/TestImg/kvm-disk-scsi_020
disk.disk19 scsi:/Images/TestImg/kvm-disk-scsi_019
disk.disk18 scsi:/Images/TestImg/kvm-disk-scsi_018
disk.disk17 scsi:/Images/TestImg/kvm-disk-scsi_017
disk.disk16 scsi:/Images/TestImg/kvm-disk-scsi_016
disk.disk1 scsi:/Images/TestImg/kvm-disk-scsi_001
disk.disk2 scsi:/Images/TestImg/kvm-disk-scsi_002
disk.disk3 scsi:/Images/TestImg/kvm-disk-scsi_003
disk.disk4 scsi:/Images/TestImg/kvm-disk-scsi_004
disk.disk5 scsi:/Images/TestImg/kvm-disk-scsi_005
disk.disk6 scsi:/Images/TestImg/kvm-disk-scsi_006
disk.disk7 scsi:/Images/TestImg/kvm-disk-scsi_007
disk.disk8 scsi:/Images/TestImg/kvm-disk-scsi_008
disk.disk9 scsi:/Images/TestImg/kvm-disk-scsi_009
disk.disk10 scsi:/Images/TestImg/kvm-disk-scsi_010
disk.disk11 scsi:/Images/TestImg/kvm-disk-scsi_011
disk.disk12 scsi:/Images/TestImg/kvm-disk-scsi_012
disk.disk13 scsi:/Images/TestImg/kvm-disk-scsi_013
disk.disk14 scsi:/Images/TestImg/kvm-disk-scsi_014
disk.disk15 scsi:/Images/TestImg/kvm-disk-scsi_015
disk.disk_io_persistent on
disk.disk_io_ring on
disk.fault_inject_disk /dev/sdc
disk.fault_inject_filesystem /Images/TestImg/fault_filesystem
disk.disk_lun_size 2147483648
disk.lun_default_size_num 0
disk.lun_small_size_num 0
vims_cluster.clustername ocfs2cluster
vims_cluster.eth_disk_config eth1
vims_cluster.eth_net_config vlan12
vims_cluster.port 7777
vims_cluster.num1 0
vims_cluster.num2 254
vims_cluster.num3 255
vims_cluster.num4 1999
vims_cluster.key1 0x7d0
vims_cluster.key2 0xfe
vims_cluster.key3 0xff
vims_cluster.key4 0x7cf
vims_cluster.vims_1 /Images/TestImg/vims_1
vims_cluster.vims_2 /Images/TestImg/vims_2
vims_cluster.node1 9.13.7.183
network.ipv6_mode False
network.manage_nic eth0
network.storage_nic eth1
network.manage_net_scheme ovs3
network.master_test_nic None
network.slave_test_nic None
network.master_extend_nic None
network.slave_extend_nic None
network.master_loop_nic None
network.slave_loop_nic None
network.sriov_driver ixgbe
network.sriov_hardware 82599
network.networking_flag false
network.networking_type openvswitch
network.target_dev tap
network.source br0
network.pxe_nic eth1
network.pxe_user Administrator
network.pxe_passwd Admin@9000
network.ipsan_ip1 None
network.ipsan_ip2 None
network.ipsan_ip3 None
network.ipsan_ip4 None
network.ipsana 1 2
network.ipsanb 3 4
guest.id 0
guest.os EulerOS_arm_V2R8SPC300B630
guest.os_type linux
guest.image_path /Images/TestImg/EulerOS_arm_V2R8SPC300B630
guest.image_format scsi
guest.user root
guest.password Huawei123
guest.inside_mode pty
guest.test_dir /tmp/uts
guest.tool_dir /tmp/uts_tools
guest.check_compatibility 0
guest.cfg_path /etc/avocado/vm_cfg
guest.disk_cfg_path /etc/avocado/conf.d/default_disk.xml
guest.mem_snap_path /Images/instance.save
guest.mem_swap_path /memswap
guest.hibernate_path /Images/hibernate.save
guest.hot_patch_ip 9.11.1.45
guest.image_type kvm-raw-backup-None-None
guest.vmtools_version
guest.share_storage local
guest.lun
xml.version arm
xml.xml_path /root/avocado/job-results/latest
xml.random_file /tmp/random_file.json
xml.host_version FSO
xml.hugepages_size 2048
xml.is_sigma False
xml.lvm_disk_path /Images/TestImg/rhel65_64_migrate_user.vhd
vminfo.linux.commands /etc/avocado/vminfo/linux/commands
vminfo.linux.files /etc/avocado/vminfo/linux/files
vminfo.windows.commands /etc/avocado/vminfo/windows/commands
vminfo.windows.files /etc/avocado/vminfo/windows/files
debug.debug True
debug.collect_log True
debug.coredump_dir /Images/core
scene.disk_mode default
scene.compare_existed_vm False
scene.tap_range 25
scene.disk_range 80
scene.delete_remote_vm True
scene.enable_hot_replacement False
scene.fabric_mode process
scene.check_vm_log False
scene.enable_pci_bridge True
scene.scene_branch normal
vmtools.linux_iso /opt/patch/programfiles/vmtools/vmtools-linux.iso
vmtools.windows_iso /opt/patch/programfiles/vmtools/vmtools-windows.iso
vmtools.vmtools_pre_patch /home/uvp-vmtools-2.3.0-127.001.x86_64.rpm
vmtools.vmtools_cur_patch /home/uvp-vmtools-2.3.0-128.001.x86_64.rpm
vmtools.vmtools_next_patch /home/uvp-vmtools-2.3.0-999.001.x86_64.rpm
gpu.m60 m60-1q
gpu.k1 k140q
gpu.k2 k240q
time.ntp_server 9.11.5.169
timeout.timeout_factor 1
timeout.proc_timeout 600
timeout.atom_timeout 600
timeout.remote_timeout 600
timeout.vm_timeout 300
timeout.pty_timeout 300
fileserver.remote_ipaddr 9.11.1.248:8080
fileserver.local_archive_dir /tmp/
fileserver.release_url http://10.175.100.158/version_release/UVP/EulerOS_Virtual_V200R010_VERSION_dragon/EulerOS-Virtual-V200R010C00SPC300B030-2021-11-19-00-06-10
ci_info.ci_json_file /Images/HUTAF_log/deploy.json
gcov.gcov_enabled False
gcov.gcov_dir /opt/gcov
gcov.gcov_config_path /opt/gcov/config.sh
gcov.gcov_package_list /opt/gcov/gcov_package.list
https.https_server 9.16.0.10:443
dns.dns_server 10.98.48.39
valgrind.modules libvirt, qemu
9.11.ip 9.11.0.10
9.11.usr root
9.11.passwd icpci
9.31.ip 9.31.1.49
9.31.usr root
9.31.passwd loveuvp
9.51.ip 9.51.3.53
9.51.usr root
9.51.passwd icpci
9.71.ip 9.71.2.2
9.71.usr root
9.71.passwd Huawei12#$
9.16.ip 9.16.0.10
9.16.usr root
9.16.passwd icpci
9.13.ip 9.13.0.10
9.13.usr root
9.13.passwd icpci
9.22.ip 9.11.0.10
9.22.usr root
9.22.passwd icpci
9.85.ip 9.85.3.8
9.85.usr root
9.85.passwd Huawei12#$
8.1.ip 8.1.1.170
8.1.usr root
8.1.passwd icpci
9.47.ip 9.71.2.2
9.47.usr root
9.47.passwd Huawei12#$
9.121.ip 9.121.3.19
9.121.usr root
9.121.passwd huawei
9.163.ip 9.163.0.6
9.163.usr root
9.163.passwd Huawei12#$

Avocado Data Directories:

base /root/avocado/tests
tests /root/avocado/tests/testcase
data /home/uts/data
logs /root/avocado/job-results/job-2021-11-19T10.10-b79e960

No variants available, using defaults only

Variant : /
Temporary dir: /var/tmp/avocado_8e7jmsn_/avocado_job_xiv0c3e9

Job ID: b79e960bc258106ce6cd22a599ea7af5abe782cd

manage_nic eth0 get ipaddr 9.13.7.183
[stdout] [info]load /usr/lib64/libvirt.so.0 success
[stdout]

R3 version load uvpconf
Load flavor JSON: /etc/avocado/vm_cfg/flavor.json done conf[362]
Commands configured by file: /usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/sysinfo/commands
Files configured by file: /usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/sysinfo/files
Profilers configured by file: /usr/local/lib/python3.7/site-packages/avocado_framework-77.0-py3.7.egg/avocado/etc/avocado/sysinfo/profilers
INIT 1-test.py:UTSTestCase.testcase
PARAMS (key=timeout, path=, default=None) => None
Test metadata:
filename: /Images/HUTAF/ev_v2r9_testcase_master/testcase/hst/hst_maintenance/kworker/hst_kworker_fun_004/test.py
teststmpdir: /var/tmp/avocado_753h_5ef
workdir: /var/tmp/avocado_8e7jmsn_/avocado_job_xiv0c3e9/1-test.py_UTSTestCase.testcase
[INFO] collect_log: True
[SYSLOG][INFO] Sysinfo configured by file: /etc/avocado/sysinfo/sysinfo.json
[SYSLOG][INFO] sysinfo log dir: /root/avocado/job-results/job-2021-11-19T10.10-b79e960/sysinfo
[SYSLOG][INFO] hypervisor type: KVM-FSO
[SYSLOG][INFO] Commands to collect: ['hugepageinfo', 'showfile /sys/devices/system/node/node*/hugepages/hugepages-*/', 'tail_log -n 100 openvswitch%ovs-operations.log', 'tail_log -n 300 %var%log%sysmonitor%hotpatch_alarm.log', 'tail_log -n 300 %var%log%sysmonitor%process_monitor_ucompute.log', 'tail_log -n 300 %var%log%sysmonitor%process_monitor_unetwork.log', 'tail_log -n 300 %var%log%sysmonitor%ucompute_alarm.log', 'tail_log -n 300 %var%log%sysmonitor%unetwork_alarm.log', 'md5sum /lib64/libevsutils_mirror.so', 'md5sum /lib64/libevsutils_migrate.so', 'md5sum /lib64/libevsutils_bum.so', 'cat_image_dir', 'ls -l /Images/core', 'ls -l /usr/local/bin', 'ls -alrt /dev/mapper', 'ls /sys/kernel/config/cluster/ocfs2cluster/node', 'multipath -ll', 'which vnstat', 'dmesg', 'dmesg -T', 'lspci -vvnnxx', 'uptime', 'ifconfig -a', 'ip link', 'route -n', 'ip -6 route', 'numactl --hardware show', 'ps -e f', 'lsof', 'ps aux', 'virsh list --all', 'ovs-vsctl show', 'ovsdb-client list-tables', 'brctl show', 'free', 'netstat -nap', 'who -a', 'last', 'iptables-save', 'mount', 'ip netns list', 'df -mPT', 'route -A inet6 -n', 'lsblk', 'dmsetup table', 'hostname', 'rpm -qa', 'lsmod', 'systemctl --no-pager list-unit-files', 'cat /var/run/syslogd.pid', 'cat /etc/libvirt/qemu.conf', 'tail_log -n 300 messages', 'tail_log -n 100 openvswitch%ovs-vswitchd.log', 'tail_log -n 100 openvswitch%ovsdb-server.log', 'tail_log -n 600 libvirt%libvirtd.log', 'tail_log -n 600 libvirt%syslog', 'tail_log -n 300 %var%log%sysmonitor.log', 'tail_log -n 300 %var%log%osHealthCheck%osHealthCheck.log', 'tail_log -n 300 %var%log%tuned%tuned.log', 'tail_log -n 300 %var%log%vhostdp%vhostdp.log', 'tail_log -n 500 %var%log%fusionsphere%upgrade%gcnUpgrade%gcnUpgrade.log', 'md5sum /usr/sbin/ovs-vswitchd', 'ovs-vsctl list Open_vSwitch .', 'uvpconf', 'ls -ld /Images/ /Images/TestImg/ /var/ /var/log/ /var/log/libvirt/ /var/log/libvirt/qemu/ /etc/qemu/qemu.conf', 'systemctl list-units', 'cat /opt/uvp/selinux/monitor/alarm/', 'cat /opt/osfilecheck/result/osfilecheck-report']
[SYSLOG][INFO] Files to collect: ['/var/log/dpdk/dpdk.log', '/var/log/audit/audit.log', '/var/log/netflow.log', '/proc/cmdline', '/proc/mounts', '/proc/meminfo', '/proc/slabinfo', '/proc/modules', '/proc/iomem', '/proc/interrupts', '/etc/ssh/sshd_config', '/Images/HUTAF/UTS/log.txt', '/etc/rc.d/rc.local', '/opt/netdev_record.rcd', '/opt/net_type.rcd', '/Images/HUTAF_log/netdev_recover.log', '/sys/fs/cgroup/cpuset/system.slice/system-ovs.slice/cpuset.cpus', '/tmp/kbox_log.txt', '/opt/uvp/evs/user_evs_config', '/opt/uvp/evs/user_evs_data', '/opt/uvp/evs/hwoff_virtio_evs_data', '/var/log/uvpkmc.log', '/etc/kdump.conf', '/Images/hotreplace/hotreplace.log', '/var/log/evs/evs.log', '/etc/evs/evs.ini', '/var/log/uvp-cpu-qosd.log', '/etc/sysconfig/uvp-cpu-qosd']
[SYSLOG][INFO] Processes to collect: ['tail_log uvpconf.log', 'tail_log uvp_compute.log', 'tail_log openvswitch%ovs-operations.log', 'vmstat 5', 'iostat -dmx 5 -t', 'avocado_sysinfo grab 9.11.0.10', 'avocado_sysinfo grab 129.9.2.159', 'tail_log messages', 'tail_log libvirt%libvirtd.log', 'tail_log libvirt%syslog', 'tail_log openvswitch%ovs-vswitchd.log', 'tail_log openvswitch%ovsdb-server.log', 'tail_log qemulog', 'tail_log sysmonitor.log', 'tail_log sysalarm.log', 'tail_log vhostdp%vhostdp.log', 'tail_log fusionsphere%upgrade%gcnUpgrade%gcnUpgrade.log', 'top -b -d 10']
START 1-test.py:UTSTestCase.testcase
wait_running(): wait 9.13.7.183 accessible 0 times success
wait_running(): wait 9.13.7.183:22 running 0 times success
System manufacturer info: _ret: 0, _out: Huawei, _err:
CoreDump info has been updated. Existing CoreDumps: []
[warn]localhost transalarm exec error stderr: alarm reported by sysalarm will be written in /var/log/trans-alarm
[info]localhost transalarm entry: ['[Id:1005][Type:1][Level:2][sec:1637285963][usec:935224][paras:ovsdb-server is abnormal][exparas:ovsdb-server]']
[INFO]CASE_NAME: hst_kworker_fun_004
[SYSLOG][INFO] There's no valid remote host, remote log job skipped...
[INFO]CASE_DIR: /Images/HUTAF/ev_v2r9_testcase_master/testcase/hst/hst_maintenance/kworker/hst_kworker_fun_004/test.py
[WARN]R3 import uvprdt pylib failed.
_cmd: top -b -n 1 -d 1 ssh_dict: {'ip': '9.13.7.183', 'usr': 'root', 'passwd': 'Nbhqv46#$', 'port': '22'}
cat /proc/meminfo
HugePages_Total has not unit in line HugePages_Total: 0.
HugePages_Free has not unit in line HugePages_Free: 0.
HugePages_Rsvd has not unit in line HugePages_Rsvd: 0.
HugePages_Surp has not unit in line HugePages_Surp: 0.
********* node 9.13.7.183 SW health check start ********
[SW health check] Host SW uvp_version: EulerOS-Virtual-V200R010C00SPC300B030
[SW health check] Host SW compiletime: 2021-11-19-00-06-10
[SW health check] Host GCN SW gcn_version: nStack-EVS-3.2.0.B001
[SW health check] Host GCN SW compiletime: 2021-10-25-00-54-24
[SW health check] Host GCN SW buildtime: 2021-10-25-00-54-24
[SW health check] Host system manufacturer: Huawei
[SW health check] Host CPU usage percent: 0.7
[SW health check] Host memory info: {'MemTotal': '535286208 kB', 'MemFree': '525176512 kB', 'MemAvailable': '485244416 kB', 'Buffers': '34368 kB', 'Cached': '674880 kB'}
[SW health check] Host hugepage info: {'HugePages_Total': '0', 'HugePages_Free': '0', 'HugePages_Rsvd': '0', 'HugePages_Surp': '0', 'Hugepagesize': '524288 kB'}
[SW health check] Module check skipped...
[SW health check] Nic: the storage_nic:eth1 belong to bridge br1! Check and recover operation skipped...
[SW health check] Process check skipped...
[SW health check] Service: sysmonitor status: running
[SW health check] Service: libvirtd status: running
[SW health check] Service: openvswitch status: failed
[SW health check] Service: start openvswitch failed!
[SW health check] VM: node 9.13.7.183 vm is clear
********* node 9.13.7.183 SW health check end ********
[IMAGE_OPERATION][9.13.7.183] sync command finished
================================ setUp ================================
System manufacturer info: _ret: 0, _out: Huawei, _err:
[setUp] prepare vm0:EulerOS_arm_V2R8SPC300B630: NONE
Connecting to libvirt: qemu:///system
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630'
[setUp] prepare vm1:EulerOS_arm_V2R8SPC300B630_1: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_1'
[setUp] prepare vm2:EulerOS_arm_V2R8SPC300B630_2: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_2'
[setUp] prepare vm3:EulerOS_arm_V2R8SPC300B630_3: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_3'
[setUp] prepare vm4:EulerOS_arm_V2R8SPC300B630_4: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_4'
[setUp] prepare vm5:EulerOS_arm_V2R8SPC300B630_5: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_5'
[setUp] prepare vm6:EulerOS_arm_V2R8SPC300B630_6: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_6'
[setUp] prepare vm7:EulerOS_arm_V2R8SPC300B630_7: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_7'
[setUp] prepare vm8:EulerOS_arm_V2R8SPC300B630_8: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_8'
[setUp] prepare vm9:EulerOS_arm_V2R8SPC300B630_9: NONE
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'EulerOS_arm_V2R8SPC300B630_9'
================================ runTest ================================
start stress /dev/mapper/3654511b1002fa2ec8464e55c0000014b

[ATOM][BEGIN] hst_kworker_fun_004/test.py:68->uvp_virt.uvp_reliability_test.stress_to_all.inject(args=(), kwargs={'filename': '/dev/mapper/3654511b1002fa2ec8464e55c0000014b', 'rw': 'randread', 'bs': 4, 'size': 30, 'iodepth': 32, 'name': '4K0R0RD1', 'output': '/home/4K0R0RD1.log', 'runtime': 500})
fio cmd is :
/home/uts/data/bin/fault_inject/common/uvp/fio/fio_arm -filename=/dev/mapper/3654511b1002fa2ec8464e55c0000014b -direct=1 -rw=randread -bs=4k -size=30G -iodepth=32 -ioengine=libaio -numjobs=1 -group_reporting -name=4K0R0RD1 -output=/home/4K0R0RD1.log -time_based -ramp_time=0 -runtime=500 -rwmixwrite=50
root 445876 440440 10 10:10 pts/0 00:00:00 /home/uts/data/bin/fault_inject/common/uvp/fio/fio_arm -filename=/dev/mapper/3654511b1002fa2ec8464e55c0000014b -direct=1 -rw=randread -bs=4k -size=30G -iodepth=32 -ioengine=libaio -numjobs=1 -group_reporting -name=4K0R0RD1 -output=/home/4K0R0RD1.log -time_based -ramp_time=0 -runtime=500 -rwmixwrite=50
root 446064 445876 40 10:10 ? 00:00:02 /home/uts/data/bin/fault_inject/common/uvp/fio/fio_arm -filename=/dev/mapper/3654511b1002fa2ec8464e55c0000014b -direct=1 -rw=randread -bs=4k -size=30G -iodepth=32 -ioengine=libaio -numjobs=1 -group_reporting -name=4K0R0RD1 -output=/home/4K0R0RD1.log -time_based -ramp_time=0 -runtime=500 -rwmixwrite=50
ret is
['445876', '446064']
[ATOM][END ] hst_kworker_fun_004/test.py:68->0
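The stress step above launches fio with the kwargs listed in the [ATOM] line. A minimal sketch of how those kwargs appear to map onto the logged command line (`build_fio_cmd` is a hypothetical helper; the `k`/`G` unit suffixes for `bs` and `size` are an assumption read off the log, not confirmed from the test source):

```python
def build_fio_cmd(fio_bin, filename, rw, bs, size, iodepth, name, output, runtime):
    """Map the test's kwargs onto the fio command line seen in the log.

    bs is taken as KiB (bs=4 -> '-bs=4k') and size as GiB (size=30 -> '-size=30G');
    both unit suffixes are assumptions inferred from the logged command.
    """
    return (
        f"{fio_bin} -filename={filename} -direct=1 -rw={rw} -bs={bs}k"
        f" -size={size}G -iodepth={iodepth} -ioengine=libaio -numjobs=1"
        f" -group_reporting -name={name} -output={output}"
        f" -time_based -ramp_time=0 -runtime={runtime} -rwmixwrite=50"
    )

cmd = build_fio_cmd(
    "/home/uts/data/bin/fault_inject/common/uvp/fio/fio_arm",
    "/dev/mapper/3654511b1002fa2ec8464e55c0000014b",
    rw="randread", bs=4, size=30, iodepth=32,
    name="4K0R0RD1", output="/home/4K0R0RD1.log", runtime=500,
)
print(cmd)
```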
['22', '27', '3', '1']
['22', '27', '3', '1', '41', '50', '44', '52']
['22', '27', '3', '1', '41', '50', '44', '52', '67', '69', '74', '91']
['22', '27', '3', '1', '41', '50', '44', '52', '67', '69', '74', '91', '116', '122', '119', '103']
/sys/devices/virtual/workqueue/cpumask 04900080,08000428,00141200,0840000a 22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103
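The mask written to /sys/devices/virtual/workqueue/cpumask is the bound CPU list encoded as comma-separated 32-bit hex words, most significant word first. A small sketch of that encoding (`cpulist_to_mask` is a hypothetical name; a 128-CPU host is assumed, matching this machine):

```python
def cpulist_to_mask(cpus, ncpus=128):
    """Encode a CPU list as the sysfs cpumask format: comma-separated
    32-bit hex words, highest word first (e.g. for 128 CPUs, 4 words)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    words = [f"{(mask >> (32 * i)) & 0xFFFFFFFF:08x}" for i in range(ncpus // 32)]
    return ",".join(reversed(words))

cpus = [22, 27, 3, 1, 41, 50, 44, 52, 67, 69, 74, 91, 116, 122, 119, 103]
print(cpulist_to_mask(cpus))  # → 04900080,08000428,00141200,0840000a
```

This reproduces exactly the mask/cpulist pair printed in the log line above.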
bind irq to node 0, cpulist is:[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]

[ATOM][BEGIN] hst_kworker_fun_004/test.py:93->uvp_virt.compute.libvirt.kvm.driver.add_cpubanned(args= 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:93->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:94->uvp_virt.compute.libvirt.kvm.driver.del_cpubanned(args=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:94->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:96->uvp_virt.network.base.driver.restart_service(args=('irqbalance', 'restart'), kwargs={})
systemctl cmd: service irqbalance restart, ret: 0, stdout: , stderr: Redirecting to /bin/systemctl restart irqbalance.service
[ATOM][END ] hst_kworker_fun_004/test.py:96->0
cpu of iscsi kworker 22 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
cpu of iscsi kworker 22 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
cpu of iscsi kworker 27 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
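The pass criterion at each step is that the iSCSI kworker's CPU falls both in the workqueue's bound cpulist and in the current node's cpulist. A minimal sketch of that check (`parse_cpulist` is a hypothetical helper covering both the `22,27,3` and `0-31` forms that appear in this log):

```python
def parse_cpulist(s):
    """Parse a cpulist string such as '[22,27,3]' or '[0-31]' into a set of ints."""
    cpus = set()
    for part in s.strip("[] ").split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

bound = parse_cpulist("[22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103]")
node0 = parse_cpulist("[0-31]")
# CPU 22, where the iscsi kworker ran, must be in both sets for the step to pass.
assert 22 in bound and 22 in node0
```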
bind irq to node 1, cpulist is:[32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]

[ATOM][BEGIN] hst_kworker_fun_004/test.py:93->uvp_virt.compute.libvirt.kvm.driver.add_cpubanned(args= 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:93->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:94->uvp_virt.compute.libvirt.kvm.driver.del_cpubanned(args=32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:94->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:96->uvp_virt.network.base.driver.restart_service(args=('irqbalance', 'restart'), kwargs={})
systemctl cmd: service irqbalance restart, ret: 0, stdout: , stderr: Redirecting to /bin/systemctl restart irqbalance.service
[ATOM][END ] hst_kworker_fun_004/test.py:96->0
cpu of iscsi kworker 50 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
cpu of iscsi kworker 44 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
cpu of iscsi kworker 44 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
bind irq to node 2, cpulist is:[64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95]

[ATOM][BEGIN] hst_kworker_fun_004/test.py:93->uvp_virt.compute.libvirt.kvm.driver.add_cpubanned(args= 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:93->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:94->uvp_virt.compute.libvirt.kvm.driver.del_cpubanned(args=64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:94->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:96->uvp_virt.network.base.driver.restart_service(args=('irqbalance', 'restart'), kwargs={})
systemctl cmd: service irqbalance restart, ret: 0, stdout: , stderr: Redirecting to /bin/systemctl restart irqbalance.service
[ATOM][END ] hst_kworker_fun_004/test.py:96->0
cpu of iscsi kworker 67 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95]
cpu of iscsi kworker 74 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95]
cpu of iscsi kworker 69 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95]
bind irq to node 3, cpulist is:[96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]

[ATOM][BEGIN] hst_kworker_fun_004/test.py:93->uvp_virt.compute.libvirt.kvm.driver.add_cpubanned(args= 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:93->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:94->uvp_virt.compute.libvirt.kvm.driver.del_cpubanned(args=96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, kwargs={})
[ATOM][END ] hst_kworker_fun_004/test.py:94->0

[ATOM][BEGIN] hst_kworker_fun_004/test.py:96->uvp_virt.network.base.driver.restart_service(args=('irqbalance', 'restart'), kwargs={})
systemctl cmd: service irqbalance restart, ret: 0, stdout: , stderr: Redirecting to /bin/systemctl restart irqbalance.service
[ATOM][END ] hst_kworker_fun_004/test.py:96->0
cpu of iscsi kworker 103 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
cpu of iscsi kworker 116 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
cpu of iscsi kworker 119 is in kworker bound cpulist [22,27,3,1,41,50,44,52,67,69,74,91,116,122,119,103] and node cpulist[96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
================================ tearDown ================================
wait_running(): wait 9.13.7.183 accessible 0 times success
wait_running(): wait 9.13.7.183:22 running 0 times success
[local]:
2021-11-19T10:08:20.748920+08:00|err|libvirtd[21128]|[21128]|virNetSocketReadWire[1829]|: End of file while reading data: Input/output error2021-11-19T10:10:43.108928+08:00|err|sshd[440920]|error: kex_exchange_identification: Connection closed by remote host
2021-11-19T10:10:47.546625+08:00|err|ovs-vsctl[442783]|ctl_fatal[2431]|00001|db_ctl_base|: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)
2021-11-19T10:10:47.702845+08:00|err|ovs-vsctl[442876]|ctl_fatal[2431]|00001|db_ctl_base|: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)
2021-11-19T10:10:47.868477+08:00|err|ovs-vsctl[442942]|ctl_fatal[2431]|00001|db_ctl_base|: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)
2021-11-19T10:10:48.044722+08:00|err|ovs-vsctl[443130]|ctl_fatal[2431]|00001|db_ctl_base|: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)
2021-11-19T10:10:48.136741+08:00|info|sh[443206]|2021-11-19T10:10:48Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:48.148435+08:00|err|ovs-vsctl[443209]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:48.204638+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:50.029419+08:00|info|sh[444226]|2021-11-19T10:10:50Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:50.041556+08:00|err|ovs-vsctl[444252]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:50.144421+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:50.402396+08:00|info|sh[444468]|2021-11-19T10:10:50Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:50.414273+08:00|err|ovs-vsctl[444471]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:50.511374+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:50.759053+08:00|info|sh[444665]|2021-11-19T10:10:50Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:50.770718+08:00|err|ovs-vsctl[444672]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:50.867233+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:51.228085+08:00|info|sh[444970]|2021-11-19T10:10:51Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:51.239874+08:00|err|ovs-vsctl[444973]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:51.344076+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:51.617640+08:00|info|sh[445157]|2021-11-19T10:10:51Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:51.630070+08:00|err|ovs-vsctl[445160]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:51.753159+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:55.356856+08:00|info|sh[446187]|2021-11-19T10:10:55Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:55.372655+08:00|err|ovs-vsctl[446190]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:55.498881+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:10:58.376705+08:00|info|sh[446873]|2021-11-19T10:10:58Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:10:58.388548+08:00|err|ovs-vsctl[446876]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:10:58.464358+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:11:01.770060+08:00|info|sh[448146]|2021-11-19T10:11:01Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:11:01.782723+08:00|err|ovs-vsctl[448179]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:11:01.911543+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:11:08.073376+08:00|info|sh[453063]|2021-11-19T10:11:08Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:11:08.085835+08:00|err|ovs-vsctl[453069]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:11:08.193913+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:11:14.430875+08:00|info|sh[456752]|2021-11-19T10:11:14Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:11:14.443374+08:00|err|ovs-vsctl[456769]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:11:14.538993+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:11:23.754805+08:00|info|sh[460715]|2021-11-19T10:11:23Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:11:23.767103+08:00|err|ovs-vsctl[460733]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:11:23.871249+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
2021-11-19T10:11:36.164216+08:00|info|sh[468571]|2021-11-19T10:11:36Z|00001|vswitchd|ERR|failed to Patch Init ALL!
2021-11-19T10:11:36.177256+08:00|err|ovs-vsctl[468587]|ctl_fatal[2431]|00002|db_ctl_base|: ovs-version=: argument does not end in "=" followed by a value.
2021-11-19T10:11:36.303619+08:00|err|systemd[1]|Failed to start Open vSwitch Database Unit.
[tearDown]testcase tearDown launch.
[tearDown]testcase tearDown end.
[tearDown]framework tearDown launch.
[tearDown]framework tearDown end.
DATA (filename=output.expected) => NOT FOUND (data sources: variant, test, file)
DATA (filename=stdout.expected) => NOT FOUND (data sources: variant, test, file)
DATA (filename=stderr.expected) => NOT FOUND (data sources: variant, test, file)
[info]localhost transalarm exec success
[info]localhost new transalarm entry: ['[Id:1005][Type:1][Level:2][sec:1637287863][usec:656791][paras:ovsdb-server is abnormal][exparas:ovsdb-server]']
[tearDown] alarm_check has no new alarm records.
[Thread-394] <class 'threading.Thread'>
[Thread-394] File "/usr/lib64/python3.7/threading.py", line 905, in _bootstrap
[Thread-394] self._bootstrap_inner()
[Thread-394] File "/usr/lib64/python3.7/threading.py", line 942, in _bootstrap_inner
[Thread-394] self.run()
[Thread-394] File "/usr/lib64/python3.7/threading.py", line 885, in run
[Thread-394] self._target(*self._args, **self._kwargs)
[Thread-394] File "/usr/local/lib/python3.7/site-packages/uvp_virt-0.1.0-py3.7.egg/uvp_virt/utils/process.py", line 57, in _call
[Thread-394] result = target(*args)
[Thread-394] File "/usr/local/lib/python3.7/site-packages/uvp_virt-0.1.0-py3.7.egg/uvp_virt/utils/utils.py", line 115, in _fabric_result
[Thread-394] _result = _FabricResult.parse(self._promise.join())
[Thread-394] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 1558, in join
[Thread-394] return self.runner._finish()
[Thread-394] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 437, in _finish
[Thread-394] self.wait()
[Thread-394] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 946, in wait
[Thread-394] time.sleep(self.input_sleep)
[Thread-393] <class 'invoke.util.ExceptionHandlingThread'>
[Thread-393] File "/usr/lib64/python3.7/threading.py", line 905, in _bootstrap
[Thread-393] self._bootstrap_inner()
[Thread-393] File "/usr/lib64/python3.7/threading.py", line 942, in _bootstrap_inner
[Thread-393] self.run()
[Thread-393] File "/usr/local/lib/python3.7/site-packages/invoke/util.py", line 234, in run
[Thread-393] super(ExceptionHandlingThread, self).run()
[Thread-393] File "/usr/lib64/python3.7/threading.py", line 885, in run
[Thread-393] self._target(*self._args, **self._kwargs)
[Thread-393] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 750, in handle_stderr
[Thread-393] buffer_, hide, output, reader=self.read_proc_stderr
[Thread-393] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 703, in _handle_output
[Thread-393] for data in self.read_proc_output(reader):
[Thread-393] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 676, in read_proc_output
[Thread-393] data = reader(self.read_chunk_size)
[Thread-393] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 1220, in read_proc_stderr
[Thread-393] return os.read(self.process.stderr.fileno(), num_bytes)
[Thread-392] <class 'invoke.util.ExceptionHandlingThread'>
[Thread-392] File "/usr/lib64/python3.7/threading.py", line 905, in _bootstrap
[Thread-392] self._bootstrap_inner()
[Thread-392] File "/usr/lib64/python3.7/threading.py", line 942, in _bootstrap_inner
[Thread-392] self.run()
[Thread-392] File "/usr/local/lib/python3.7/site-packages/invoke/util.py", line 234, in run
[Thread-392] super(ExceptionHandlingThread, self).run()
[Thread-392] File "/usr/lib64/python3.7/threading.py", line 885, in run
[Thread-392] self._target(*self._args, **self._kwargs)
[Thread-392] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 737, in handle_stdout
[Thread-392] buffer_, hide, output, reader=self.read_proc_stdout
[Thread-392] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 703, in _handle_output
[Thread-392] for data in self.read_proc_output(reader):
[Thread-392] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 676, in read_proc_output
[Thread-392] data = reader(self.read_chunk_size)
[Thread-392] File "/usr/local/lib/python3.7/site-packages/invoke/runners.py", line 1214, in read_proc_stdout
[Thread-392] data = os.read(self.process.stdout.fileno(), num_bytes)
Children of _run_avocado process: [psutil.Process(pid=445876, name='fio_arm', started='10:10:53'), psutil.Process(pid=446064, name='fio_arm', started='10:10:54')]
WARN 1-test.py:UTSTestCase.testcase -> TestWarn: Test passed but there were warnings during execution. Check the log for details.

Children of _run_avocado process: [psutil.Process(pid=445876, name='fio_arm', started='10:10:53'), psutil.Process(pid=446064, name='fio_arm', started='10:10:54')]
Test results available in /root/avocado/job-results/job-2021-11-19T10.10-b79e960
Not logging /var/log/evs/evs.log (file does not exist)
Not logging /etc/evs/evs.ini (file does not exist)
Not logging /var/log/netflow.log (file does not exist)
Not logging /var/log/audit/audit.log (file does not exist)
Not logging /tmp/kbox_log.txt (file does not exist)
Not logging /Images/hotreplace/hotreplace.log (file does not exist)
Daemon process 'tail_log fusionsphere%upgrade%gcnUpgrade%gcnUpgrade.log' (pid 438502) terminated abnormally (code 1)

Is this a paste of the test log?

Yes, it is the test record from my own verification run.

Please also give a brief introduction to this feature, so that everyone can understand it.

Feature introduction:
In IP SAN storage scenarios, when iSCSI on the host processes storage packets, it hands the work off to a kworker. By default that kworker is unbound and may run on any CPU, so the NUMA node of the NIC that sends and receives the storage packets can differ from the NUMA node on which the iSCSI kworker runs. This causes cross-NUMA-node or even cross-die memory copies, and on the ARM architecture cross-die access degrades performance sharply. This feature solves the problem by having iSCSI create NUMA-aware kworkers: every kworker that iSCSI dispatches work to runs on the NUMA node of the storage NIC, avoiding the performance loss caused by cross-node and cross-die copies. Measured performance improves by more than 30%.
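The per-node checks in the test log above (e.g. "cpu of iscsi kworker 50 is in ... node cpulist[32,33,...]") boil down to expanding a kernel cpulist string and testing membership. Below is a minimal sketch of that verification logic; the helper names (`parse_cpulist`, `kworker_on_node`) are illustrative, not functions from the actual test suite.

```python
def parse_cpulist(s):
    """Expand a kernel cpulist string like '0-31,64,66-67' into a set of CPU ids."""
    cpus = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus


def kworker_on_node(kworker_cpu, node_cpulist):
    """Check that the CPU an iscsi kworker last ran on belongs to the
    cpulist of the NUMA node the storage NIC is attached to."""
    return kworker_cpu in parse_cpulist(node_cpulist)


if __name__ == "__main__":
    # On a live host a node's cpulist would come from sysfs,
    # e.g. /sys/devices/system/node/node1/cpulist; here we use
    # the node 1 range from the log above.
    node1 = "32-63"
    print(kworker_on_node(50, node1))  # kworker 50 ran on node 1 -> True
    print(kworker_on_node(22, node1))  # kworker 22 ran on node 0 -> False
```

On a real system the kworker's current CPU can be read from `/proc/<pid>/stat` (the `processor` field), which is presumably how the test maps each iscsi kworker to a node before comparing against the NIC's node cpulist.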

XieXiuQi changed issue type from Task to Requirement
XieXiuQi changed issue state from To do to New
zhengzengkai, through src-openeuler/kernel Pull Request !409, changed issue state from New to Done
