| Topic | Replies | Views | Activity |
|---|---|---|---|
| About Technical Help | 0 | 2880 | October 3, 2017 |
| Is it possible to schedule uptime? | 0 | 15 | January 6, 2026 |
| Rocky or RHEL for On-demand Instance | 0 | 22 | December 14, 2025 |
| Possible to upgrade CUDA in the Lambda Stack? | 2 | 70 | December 2, 2025 |
| Customer from India | 2 | 239 | November 27, 2025 |
| With all due respect: Lambda - CHECK YOUR EMAILS | 1 | 177 | November 18, 2025 |
| Initial setup advice | 0 | 53 | November 4, 2025 |
| Rent a windows machine | 1 | 72 | October 14, 2025 |
| Unable ssh via laptop but can ssh via other server and connect to the Jupyter via browser | 1 | 78 | October 14, 2025 |
| Nvidia Profiling support | 1 | 122 | October 1, 2025 |
| Hermes3-405B API not responding | 1 | 96 | October 1, 2025 |
| GH200 PyTorch installation failure | 1 | 189 | September 30, 2025 |
| Unable to create new instance | 1 | 75 | September 22, 2025 |
| Booting takes long time and then "alert" status for gpu_1x_h100_pcie | 4 | 2617 | September 15, 2025 |
| PS only reaches 28v when ramp is applied | 0 | 38 | September 9, 2025 |
| Using docker context with Lambda VM | 1 | 86 | September 5, 2025 |
| Cheapest way to upload data into the filesystem with slow upload bandwidth | 3 | 279 | August 26, 2025 |
| Is there a list of instance types by region somewhere? | 5 | 133 | August 26, 2025 |
| No display on Lambda Vector Pro | 2 | 101 | August 20, 2025 |
| 2 identical Lambda Quads; one can't connect to display; one can | 1 | 76 | August 20, 2025 |
| Lambda Vector One Ram Upgrade | 1 | 73 | August 20, 2025 |
| Unable to login with google | 1 | 74 | August 20, 2025 |
| Is it possible to persist the workspace configuration? | 2 | 129 | August 13, 2025 |
| Adding a host based adapter card to a scalar server | 0 | 40 | August 1, 2025 |
| Training Microsoft's MatterGen model doesn't use GPUs | 2 | 90 | July 10, 2025 |
| Tool calling in Lambda Inference API | 6 | 318 | June 24, 2025 |
| No direct startup script support? | 2 | 164 | June 15, 2025 |
| Inference API Timeout | 2 | 125 | June 6, 2025 |
| Inferrence Streaming Changed | 0 | 62 | May 13, 2025 |
| Downloading speed is slow on H100, only ~70MB/s | 4 | 1692 | June 8, 2023 |