From rosaliarose1106@gmail.com Wed Apr 3 11:13:39 2024
From: rosaliarose1106@gmail.com
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Vlone Store - Buy Authentic Vlone Shirts & Hoodies.
Date: Wed, 03 Apr 2024 11:13:36 +0000
Message-ID:
<171214281650.179655.8308447123627727757@ip-172-30-36-63.ec2.internal>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============0171998196771753339=="
--===============0171998196771753339==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Transform your style at the Vlone Store. Explore exclusive streetwear collections for a trendy look. Elevate your fashion. Shop now!
--===============0171998196771753339==--
From psuchanecki@almalinux.org Wed Apr 10 00:08:35 2024
From: Pawel Suchanecki
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Re: AlmaLinux Appstream
Date: Tue, 09 Apr 2024 13:19:41 +0200
Message-ID:
In-Reply-To:
<170713684920.9159.12471245097020069420@ip-172-30-36-63.ec2.internal>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============8642415859090466093=="
--===============8642415859090466093==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Hi ccto,
Yes, you are correct in both cases. As a general rule, we follow RHEL's
Application Streams release life cycles exactly.
Thanks for asking & sorry for the very long delay :blush:
Have a nice day,
Pawel
Pawel Suchanecki
Evangelist / AlmaLinux.org
The open-source EL: long-term stability trusted by top organizations.
On Mon, Feb 5, 2024 at 1:41 PM <george.ccto@gmail.com> wrote:
> Hi AlmaLinux, Pawel Suchanecki, and others,
>
> May I ask, specifically:
>
> in AlmaLinux 8, how long will MariaDB 10.3 be maintained? Until 2029?
>
> in AlmaLinux 9, how long will MariaDB 10.5 be maintained? Until 2032?
>
> Both versions are inside the RHEL 8 (and RHEL 9) Full Life Application
> Streams Release Life Cycle.
>
> Thank you very much for your kind attention.
>
> Regards
> ccto.
> _______________________________________________
> AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> To unsubscribe send an email to users-leave(a)lists.almalinux.org
>
--===============8642415859090466093==--
From alessandro.baggi@gmail.com Wed Apr 10 15:36:02 2024
From: Alessandro Baggi
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Almalinux and integritysetup speed problem
Date: Wed, 10 Apr 2024 17:31:25 +0200
Message-ID: <19a009d4-425b-4ded-9856-86f3ee75f445@gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============3011413703585698995=="
--===============3011413703585698995==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Hi list,
I'm testing dm-integrity with mdadm RAID1 on a spare machine.
This is an old machine (i7-2600K). I'm using 2x500GB WD Caviar Black SATA3 drives.
I'm running some tests to see how much performance changes when using
dm-integrity.
First I created a RAID1 with mdadm and checked the performance: writing
50G I got 100 MB/s.
Then I destroyed the md device, and on each disk I ran:
# integritysetup format --integrity xxhash64 /dev/sdb1
# integritysetup format --integrity xxhash64 /dev/sdc1
During this process the performance was good, ~95 MB/s.
After this I opened the devices with:
# integritysetup open --integrity xxhash64 /dev/sdb1 sdb1
# integritysetup open --integrity xxhash64 /dev/sdc1 sdc1
and created the mdadm array with:
# mdadm --create /dev/md10 --level=raid1 --raid-devices=2
/dev/mapper/sdb1 /dev/mapper/sdc1
and reading /proc/mdstat I got this:
[>....................] resync = 1.1% (4929792/443175424)
finish=677.3min speed=10782K/sec
Why is there such a big drop in speed during the sync?
Am I missing something?
Will it really take 11 hours to sync 2x500GB HDDs? Why so slow?
Before this I tried the same on a newer machine with an i7-8700K and
2x2TB WD Gold drives, and there the sync speed dropped to ~35 MB/s.
Is there anything I can do to improve this?
Thank you in advance.
Alessandro.
--===============3011413703585698995==--
From jonathan@almalinux.org Wed Apr 10 15:48:03 2024
From: Jonathan Wright
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Re: Almalinux and integritysetup speed problem
Date: Wed, 10 Apr 2024 10:44:51 -0500
Message-ID:
In-Reply-To: <19a009d4-425b-4ded-9856-86f3ee75f445@gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============3494385975070880146=="
--===============3494385975070880146==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
You can adjust the mdadm rebuild rate pretty easily. By default it's quite
slow to avoid causing strain on system resources.
See #1 at
https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
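The knobs described in that article boil down to two sysctls (a sketch; values are in KB/s, raising them needs root, and the /proc paths exist only once the md driver is loaded):

```shell
# Show the current md resync throttle values, if the md driver is loaded.
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max \
    2>/dev/null || echo "md driver not loaded"

# Raise both limits (as root) so the resync is not throttled:
#   sysctl -w dev.raid.speed_limit_min=200000
#   sysctl -w dev.raid.speed_limit_max=2000000

# Then watch progress:
#   watch -n 5 cat /proc/mdstat
```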
On Wed, Apr 10, 2024 at 10:36 AM Alessandro Baggi <
alessandro.baggi(a)gmail.com> wrote:
> Hi list,
> I'm trying on a spare machine dm-integrity with mdadm raid1.
> This is an old machine (i7-2600k). I'm using 2x500GB wd caviar black SATA3.
>
> I'm trying to run some test and see how much performance changes using
> dm-integrity.
>
> First I created a raid1 with mdadm and checked the performances and
> writing 50G I got 100 MB/s
>
> Then I destroied the md device and on every disk I run:
>
> # integritysetup format --integrity xxhash64 /dev/sdb1
> # integritysetup format --integrity xxhash64 /dev/sdc1
>
> During this process the performances was good ~95MB/s.
> After this I opened the devices with:
>
> # integritysetup open integrity xxhash64 /dev/sdb1 sdb1
> # integritysetup open integrity xxhash64 /dev/sdc1 sdc1
>
> and created the mdadm array with:
>
> # mdadm --create /dev/md10 --level=raid1 --raid-devices=2
> /dev/mapper/sdb1 /dev/mapper/sdc1
>
> and reading on /proc/mdstat I got this:
>
> [>....................] resync = 1.1% (4929792/443175424)
> finish=677.3min speed=10782K/sec
>
> Why there is so big drop on speed during the sync?
>
> I'm missing something?
>
> I'll need 11 hours to sync 2x500GB hdd? Why so slow?
>
> Before this I tried the same on a newer machine with i7 8700k and 2x2TB
> WD gold and I get a drop sync speed at ~35MB/s.
>
> There is something that I can do to improve this?
>
> Thank you in advance.
>
> Alessandro.
> _______________________________________________
> AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> To unsubscribe send an email to users-leave(a)lists.almalinux.org
>
--
Jonathan Wright
AlmaLinux Foundation
Mattermost: chat
--===============3494385975070880146==
Content-Type: text/html
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="attachment.html"
MIME-Version: 1.0
PGRpdiBkaXI9Imx0ciI+PGRpdj5Zb3UgY2FuIGFkanVzdCB0aGUgbWRhZG0gcmVidWlsZCByYXRl
IHByZXR0eSBlYXNpbHkuwqAgQnkgZGVmYXVsdCBpdCYjMzk7cyBxdWl0ZSBzbG93IHRvIGF2b2lk
IGNhdXNpbmcgc3RyYWluIG9uIHN5c3RlbSByZXNvdXJjZXMuPC9kaXY+PGRpdj48YnI+PC9kaXY+
PGRpdj5TZWUgIzEgYXQgPGEgaHJlZj0iaHR0cHM6Ly93d3cuY3liZXJjaXRpLmJpei90aXBzL2xp
bnV4LXJhaWQtaW5jcmVhc2UtcmVzeW5jLXJlYnVpbGQtc3BlZWQuaHRtbCI+aHR0cHM6Ly93d3cu
Y3liZXJjaXRpLmJpei90aXBzL2xpbnV4LXJhaWQtaW5jcmVhc2UtcmVzeW5jLXJlYnVpbGQtc3Bl
ZWQuaHRtbDwvYT48L2Rpdj48L2Rpdj48YnI+PGRpdiBjbGFzcz0iZ21haWxfcXVvdGUiPjxkaXYg
ZGlyPSJsdHIiIGNsYXNzPSJnbWFpbF9hdHRyIj5PbiBXZWQsIEFwciAxMCwgMjAyNCBhdCAxMDoz
NuKAr0FNIEFsZXNzYW5kcm8gQmFnZ2kgJmx0OzxhIGhyZWY9Im1haWx0bzphbGVzc2FuZHJvLmJh
Z2dpQGdtYWlsLmNvbSI+YWxlc3NhbmRyby5iYWdnaUBnbWFpbC5jb208L2E+Jmd0OyB3cm90ZTo8
YnI+PC9kaXY+PGJsb2NrcXVvdGUgY2xhc3M9ImdtYWlsX3F1b3RlIiBzdHlsZT0ibWFyZ2luOjBw
eCAwcHggMHB4IDAuOGV4O2JvcmRlci1sZWZ0OjFweCBzb2xpZCByZ2IoMjA0LDIwNCwyMDQpO3Bh
ZGRpbmctbGVmdDoxZXgiPkhpIGxpc3QsPGJyPgpJJiMzOTttIHRyeWluZyBvbiBhIHNwYXJlIG1h
Y2hpbmUgZG0taW50ZWdyaXR5IHdpdGggbWRhZG0gcmFpZDEuPGJyPgpUaGlzIGlzIGFuIG9sZCBt
YWNoaW5lIChpNy0yNjAwaykuIEkmIzM5O20gdXNpbmcgMng1MDBHQiB3ZCBjYXZpYXIgYmxhY2sg
U0FUQTMuPGJyPgo8YnI+CkkmIzM5O20gdHJ5aW5nIHRvIHJ1biBzb21lIHRlc3QgYW5kIHNlZSBo
b3cgbXVjaCBwZXJmb3JtYW5jZSBjaGFuZ2VzIHVzaW5nIDxicj4KZG0taW50ZWdyaXR5Ljxicj4K
PGJyPgpGaXJzdCBJIGNyZWF0ZWQgYSByYWlkMSB3aXRoIG1kYWRtIGFuZCBjaGVja2VkIHRoZSBw
ZXJmb3JtYW5jZXMgYW5kIDxicj4Kd3JpdGluZyA1MEcgSSBnb3QgMTAwIE1CL3M8YnI+Cjxicj4K
VGhlbiBJIGRlc3Ryb2llZCB0aGUgbWQgZGV2aWNlIGFuZCBvbiBldmVyeSBkaXNrIEkgcnVuOjxi
cj4KPGJyPgrCoCDCoCDCoCDCoCAjIGludGVncml0eXNldHVwIGZvcm1hdCAtLWludGVncml0eSB4
eGhhc2g2NCAvZGV2L3NkYjE8YnI+CsKgIMKgIMKgIMKgICMgaW50ZWdyaXR5c2V0dXAgZm9ybWF0
IC0taW50ZWdyaXR5IHh4aGFzaDY0IC9kZXYvc2RjMTxicj4KPGJyPgpEdXJpbmcgdGhpcyBwcm9j
ZXNzIHRoZSBwZXJmb3JtYW5jZXMgd2FzIGdvb2Qgfjk1TUIvcy48YnI+CkFmdGVyIHRoaXMgSSBv
cGVuZWQgdGhlIGRldmljZXMgd2l0aDo8YnI+Cjxicj4KwqAgwqAgwqAgwqAgIyBpbnRlZ3JpdHlz
ZXR1cCBvcGVuIGludGVncml0eSB4eGhhc2g2NCAvZGV2L3NkYjEgc2RiMTxicj4KwqAgwqAgwqAg
wqAgIyBpbnRlZ3JpdHlzZXR1cCBvcGVuIGludGVncml0eSB4eGhhc2g2NCAvZGV2L3NkYzEgc2Rj
MTxicj4KPGJyPgphbmQgY3JlYXRlZCB0aGUgbWRhZG0gYXJyYXkgd2l0aDo8YnI+Cjxicj4KwqAg
wqAgwqAgwqAgIyBtZGFkbSAtLWNyZWF0ZSAvZGV2L21kMTAgLS1sZXZlbD1yYWlkMSAtLXJhaWQt
ZGV2aWNlcz0yIDxicj4KL2Rldi9tYXBwZXIvc2RiMSAvZGV2L21hcHBlci9zZGMxPGJyPgo8YnI+
CmFuZCByZWFkaW5nIG9uIC9wcm9jL21kc3RhdCBJIGdvdCB0aGlzOjxicj4KPGJyPgpbJmd0Oy4u
Li4uLi4uLi4uLi4uLi4uLi4uXcKgIHJlc3luYyA9wqAgMS4xJSAoNDkyOTc5Mi80NDMxNzU0MjQp
IDxicj4KZmluaXNoPTY3Ny4zbWluIHNwZWVkPTEwNzgySy9zZWM8YnI+Cjxicj4KV2h5IHRoZXJl
IGlzIHNvIGJpZyBkcm9wIG9uIHNwZWVkIGR1cmluZyB0aGUgc3luYz88YnI+Cjxicj4KSSYjMzk7
bSBtaXNzaW5nIHNvbWV0aGluZz88YnI+Cjxicj4KSSYjMzk7bGwgbmVlZCAxMSBob3VycyB0byBz
eW5jIDJ4NTAwR0IgaGRkPyBXaHkgc28gc2xvdz88YnI+Cjxicj4KQmVmb3JlIHRoaXMgSSB0cmll
ZCB0aGUgc2FtZSBvbiBhIG5ld2VyIG1hY2hpbmUgd2l0aCBpNyA4NzAwayBhbmQgMngyVEIgPGJy
PgpXRCBnb2xkIGFuZCBJIGdldCBhIGRyb3Agc3luYyBzcGVlZCBhdCB+MzVNQi9zLjxicj4KPGJy
PgpUaGVyZSBpcyBzb21ldGhpbmcgdGhhdCBJIGNhbiBkbyB0byBpbXByb3ZlIHRoaXM/PGJyPgo8
YnI+ClRoYW5rIHlvdSBpbiBhZHZhbmNlLjxicj4KPGJyPgpBbGVzc2FuZHJvLjxicj4KX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX188YnI+CkFsbWFMaW51eCBV
c2VycyBtYWlsaW5nIGxpc3QgLS0gPGEgaHJlZj0ibWFpbHRvOnVzZXJzQGxpc3RzLmFsbWFsaW51
eC5vcmciIHRhcmdldD0iX2JsYW5rIj51c2Vyc0BsaXN0cy5hbG1hbGludXgub3JnPC9hPjxicj4K
VG8gdW5zdWJzY3JpYmUgc2VuZCBhbiBlbWFpbCB0byA8YSBocmVmPSJtYWlsdG86dXNlcnMtbGVh
dmVAbGlzdHMuYWxtYWxpbnV4Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnVzZXJzLWxlYXZlQGxpc3Rz
LmFsbWFsaW51eC5vcmc8L2E+PGJyPgo8L2Jsb2NrcXVvdGU+PC9kaXY+PGJyIGNsZWFyPSJhbGwi
Pjxicj48c3BhbiBjbGFzcz0iZ21haWxfc2lnbmF0dXJlX3ByZWZpeCI+LS0gPC9zcGFuPjxicj48
ZGl2IGRpcj0ibHRyIiBjbGFzcz0iZ21haWxfc2lnbmF0dXJlIj48ZGl2IGRpcj0ibHRyIj5Kb25h
dGhhbiBXcmlnaHQ8YnI+QWxtYUxpbnV4IEZvdW5kYXRpb248ZGl2Pk1hdHRlcm1vc3Q6wqA8YSBo
cmVmPSJodHRwczovL2NoYXQuYWxtYWxpbnV4Lm9yZy9hbG1hbGludXgvbWVzc2FnZXMvQGpvbmF0
aGFuIiB0YXJnZXQ9Il9ibGFuayI+Y2hhdDwvYT48L2Rpdj48L2Rpdj48L2Rpdj4K
--===============3494385975070880146==--
From alessandro.baggi@gmail.com Wed Apr 10 17:01:26 2024
From: Alessandro Baggi
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Re: Almalinux and integritysetup speed problem
Date: Wed, 10 Apr 2024 17:55:12 +0200
Message-ID:
In-Reply-To:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============8862862012707240873=="
--===============8862862012707240873==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Hi Jonathan,
and thank you for your answer.
I followed the suggestion in the link you proposed, but setting
speed_limit_min to 200000 does not change anything. The resync speed is
always speed=9984K/sec
and
# sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 200000
The sync speed limit seems not to be the problem.
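One way to narrow this down is to benchmark each layer separately, e.g. writing to the dm-integrity mapping directly without md on top (a sketch; the /dev/mapper names follow the earlier messages, writing to a device destroys its contents, and the file-based line is only there to sanity-check the dd invocation):

```shell
# DESTRUCTIVE on a real device: sequential-write test of the integrity
# mapping alone, bypassing mdadm (run as root; device name is an example):
#   dd if=/dev/zero of=/dev/mapper/sdb1 bs=1M count=1024 oflag=direct status=progress

# Harmless dry run of the same dd invocation against a plain file:
dd if=/dev/zero of=/tmp/dd-bench.img bs=1M count=16 conv=fsync status=none
ls -l /tmp/dd-bench.img
rm -f /tmp/dd-bench.img
```

If the bare mapping is already slow, the overhead comes from dm-integrity itself rather than from the md resync throttle.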
(I also received an email saying I'm a moderated member and my message
needs to be approved... why is that?)
Thank you in advance.
Alessandro.
On 10/04/24 17:44, Jonathan Wright wrote:
> You can adjust the mdadm rebuild rate pretty easily. By default it's
> quite slow to avoid causing strain on system resources.
>
> See #1 at
> https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
>
> On Wed, Apr 10, 2024 at 10:36 AM Alessandro Baggi wrote:
>
> Hi list,
> I'm trying on a spare machine dm-integrity with mdadm raid1.
> This is an old machine (i7-2600k). I'm using 2x500GB wd caviar black
> SATA3.
>
> I'm trying to run some test and see how much performance changes using
> dm-integrity.
>
> First I created a raid1 with mdadm and checked the performances and
> writing 50G I got 100 MB/s
>
> Then I destroied the md device and on every disk I run:
>
>         # integritysetup format --integrity xxhash64 /dev/sdb1
>         # integritysetup format --integrity xxhash64 /dev/sdc1
>
> During this process the performances was good ~95MB/s.
> After this I opened the devices with:
>
>         # integritysetup open integrity xxhash64 /dev/sdb1 sdb1
>         # integritysetup open integrity xxhash64 /dev/sdc1 sdc1
>
> and created the mdadm array with:
>
>         # mdadm --create /dev/md10 --level=raid1 --raid-devices=2
> /dev/mapper/sdb1 /dev/mapper/sdc1
>
> and reading on /proc/mdstat I got this:
>
> [>....................]  resync =  1.1% (4929792/443175424)
> finish=677.3min speed=10782K/sec
>
> Why there is so big drop on speed during the sync?
>
> I'm missing something?
>
> I'll need 11 hours to sync 2x500GB hdd? Why so slow?
>
> Before this I tried the same on a newer machine with i7 8700k and 2x2TB
> WD gold and I get a drop sync speed at ~35MB/s.
>
> There is something that I can do to improve this?
>
> Thank you in advance.
>
> Alessandro.
> _______________________________________________
> AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> To unsubscribe send an email to users-leave(a)lists.almalinux.org
>
>
>
> --
> Jonathan Wright
> AlmaLinux Foundation
> Mattermost: chat
--===============8862862012707240873==--
From jonathan@almalinux.org Wed Apr 10 17:05:10 2024
From: Jonathan Wright
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Re: Almalinux and integritysetup speed problem
Date: Wed, 10 Apr 2024 12:04:20 -0500
Message-ID:
In-Reply-To:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============7933732721521766316=="
--===============7933732721521766316==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Try this:
sysctl -w dev.raid.speed_limit_min=200000
sysctl -w dev.raid.speed_limit_max=200000
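For what it's worth, to keep such values across reboots they can also go into a sysctl.d drop-in (a sketch; the filename is an example, see sysctl.d(5)):

```shell
# Write an example drop-in (needs root privileges):
cat <<'EOF' | sudo tee /etc/sysctl.d/90-raid-resync.conf
dev.raid.speed_limit_min = 200000
dev.raid.speed_limit_max = 200000
EOF

# Apply all sysctl configuration without rebooting:
sudo sysctl --system
```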
On Wed, Apr 10, 2024 at 12:01 PM Alessandro Baggi <
alessandro.baggi(a)gmail.com> wrote:
> Hi Jonathan,
> and thank you for your answer.
>
> I followed the suggestion on the link you purposed but setting
> speed_limit_min to 200000 does not change anything. The rsync speed is
> always speed=9984K/sec
>
> and
> # sysctl dev.raid.speed_limit_min
> dev.raid.speed_limit_min = 200000
>
> The sync speed limit seems not to be the problem.
>
> (I received another email that says I'm a moderated member and my email
> need to approved...why this?)
>
> Thank you in advance.
>
> Alessandro.
>
> Il 10/04/24 17:44, Jonathan Wright ha scritto:
> > You can adjust the mdadm rebuild rate pretty easily. By default it's
> > quite slow to avoid causing strain on system resources.
> >
> > See #1 at
> > https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
> >
> > On Wed, Apr 10, 2024 at 10:36 AM Alessandro Baggi wrote:
> >
> > Hi list,
> > I'm trying on a spare machine dm-integrity with mdadm raid1.
> > This is an old machine (i7-2600k). I'm using 2x500GB wd caviar black
> > SATA3.
> >
> > I'm trying to run some test and see how much performance changes
> using
> > dm-integrity.
> >
> > First I created a raid1 with mdadm and checked the performances and
> > writing 50G I got 100 MB/s
> >
> > Then I destroied the md device and on every disk I run:
> >
> > # integritysetup format --integrity xxhash64 /dev/sdb1
> > # integritysetup format --integrity xxhash64 /dev/sdc1
> >
> > During this process the performances was good ~95MB/s.
> > After this I opened the devices with:
> >
> > # integritysetup open integrity xxhash64 /dev/sdb1 sdb1
> > # integritysetup open integrity xxhash64 /dev/sdc1 sdc1
> >
> > and created the mdadm array with:
> >
> > # mdadm --create /dev/md10 --level=raid1 --raid-devices=2
> > /dev/mapper/sdb1 /dev/mapper/sdc1
> >
> > and reading on /proc/mdstat I got this:
> >
> > [>....................] resync = 1.1% (4929792/443175424)
> > finish=677.3min speed=10782K/sec
> >
> > Why there is so big drop on speed during the sync?
> >
> > I'm missing something?
> >
> > I'll need 11 hours to sync 2x500GB hdd? Why so slow?
> >
> > Before this I tried the same on a newer machine with i7 8700k and
> 2x2TB
> > WD gold and I get a drop sync speed at ~35MB/s.
> >
> > There is something that I can do to improve this?
> >
> > Thank you in advance.
> >
> > Alessandro.
> > _______________________________________________
> > AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> >
> > To unsubscribe send an email to users-leave(a)lists.almalinux.org
> >
> >
> >
> >
> > --
> > Jonathan Wright
> > AlmaLinux Foundation
> > Mattermost: chat <https://chat.almalinux.org/almalinux/messages/@jonathan>
> _______________________________________________
> AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> To unsubscribe send an email to users-leave(a)lists.almalinux.org
>
--
Jonathan Wright
AlmaLinux Foundation
Mattermost: chat
--===============7933732721521766316==--
From carles.acosta.silva@gmail.com Thu Apr 18 22:11:57 2024
From: Carles Acosta Silva
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Autofs: too many open files issue
Date: Wed, 17 Apr 2024 15:48:38 +0000
Message-ID:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============0634096980526507278=="
--===============0634096980526507278==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Hi,
We are running AlmaLinux 9.3 on several nodes that normally use autofs to
mount various kinds of remote directories. Since last week, we have been
seeing these errors:
automount[2150866]: open_pipe:161: failed to open pipe: Too many open files
and automount stops working. It does not appear that any user has reached
the maximum number of open files.
Do you have any idea where we can look to find a solution?
Thank you in advance.
Best regards,
Carles
--===============0634096980526507278==--
From alessandro.baggi@gmail.com Sat Apr 20 00:01:11 2024
From: Alessandro Baggi
To: users@lists.almalinux.org
Subject: [AlmaLinux Users] Re: Autofs: too many open files issue
Date: Fri, 19 Apr 2024 09:05:22 +0200
Message-ID: <38ccba93-190f-4e8d-b140-3565958268d7@gmail.com>
In-Reply-To:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============7744704104559678183=="
--===============7744704104559678183==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
On 17/04/24 17:48, Carles Acosta Silva wrote:
> Hi,
>
> We are running AlmaLinux 9.3 on several nodes that normally use autofs
> to mount different kinds of remote directories. Since the last week, we
> are seeing these errors:
>
> automount[2150866]: open_pipe:161: failed to open pipe: Too many open files
>
> And automount stops working. It does not seem that any user has arrived
> at the maximum number of open files.
>
> Do you have any idea where we can look to find a solution?
>
> Thank you in advance.
>
> Best regards,
>
> Carles
>
> _______________________________________________
> AlmaLinux Users mailing list -- users(a)lists.almalinux.org
> To unsubscribe send an email to users-leave(a)lists.almalinux.org
Hi Carles,
You should check how many files are open on the system and what the
system-wide limit on open files is.
You should also check which user gets the "too many open files" error
(root, I suppose?) and how many files that specific user is allowed to
open (ulimit -n can help).
You could also run lsof to see how many files are open and what they
are. Could there be a program that never closes its file handles?
I don't know the internals of automount, but maybe some resource is not
being unmounted?
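The checks above can be sketched with standard Linux tools (a sketch; the paths are the usual /proc files, and the automount-specific lines assume the daemon is running):

```shell
# Per-process soft limit on open files for the current shell:
ulimit -n

# System-wide file handles: allocated, unused, maximum:
cat /proc/sys/fs/file-nr

# Count open file descriptors of the automount daemon (root may be needed):
#   pid=$(pidof automount)
#   ls /proc/$pid/fd | wc -l

# Or list everything automount has open:
#   lsof -p "$(pidof automount)"
```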
Hope that helps.
Alessandro.
--===============7744704104559678183==--