Create a tldr version of the tmt man page #968
I would prefer to have commands for debugging there, like:

@psss I presume we want to maintain tldr .md file(s) within this repo and create pull requests to github.com/tldr-pages/tldr?

Wouldn't that be basically the same as

Yes, agreed. And because it will be already there, why not add the

Exactly, that content would be the same. It could be useful for those who would like to get a quick start from the command line but don't have (or don't want to have) the

I would slightly prefer a separate subcommand instead of the

Hi @psss, I will look into this issue. Thanks.

Hi @martinhoyer, please let me know what

Thanks @idorax! Once we have the English version finalized, I'll ping you. Can be done in a separate PR later as well :)

No problem! @martinhoyer, I'll take a look at the tldr/CONTRIBUTING.md#directory-structure, thanks for sharing the doc!

@martinhoyer, what about including this in

Yep, it's on the list. Thanks for the reminder. 1.35 sounds good.
Here the

# tmt
> Runs tests in containers.
> More information: <https://tmt.readthedocs.io/en/stable/>.
- Make project tests manageable by `tmt`:
`tmt init`
- Describe how to run tests:
`tmt tests create`
- Run all tests in a container or VM:
`tmt run`
- Show last failed test:
`???`
- Show last log:
`tmt ???`
- Run failed test again:
`tmt ???`
- Login to last executed container or VM:
`tmt ???`
Thanks, much appreciated. :)
What I use often, FWIW:
I like @happz's favorites above, perhaps one thought: what about showing inspiring examples for each command, e.g. not enumerating all
In the examples I'd suggest using long options to make the examples as self-explanatory as possible (users will find the short versions soon/easily):
I'd suggest also including at least one

Or maybe both?
We can reference subcommands: https://github.com/tldr-pages/tldr/blob/main/CONTRIBUTING.md#subcommands
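Following that convention, subcommand pages get their own files named `command-subcommand.md`. A hypothetical sketch of what such a page could look like, in the same format as the draft above (the file name `tmt-run.md` and the descriptions are my assumptions, not an agreed draft):

# tmt run

> Execute tmt test steps.
> See also: `tmt`.
> More information: <https://tmt.readthedocs.io/en/stable/>.

- Run all steps of every plan:

`tmt run`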
How do you actually run tests and see the results? I tried the example test, it failed, and the output is not helpful. It is so verbose that it looks like debug info, and that debug info still doesn't contain info about what went wrong: which test failed, why, etc.

    $ tmt run --all provision --how container
    /var/tmp/tmt/run-011
    /default/plan
    discover
    how: fmf
    directory: /data/s/gitlab-ai-gateway/xxx
    summary: 1 test selected
    provision
    queued provision.provision task #1: default-0
    provision.provision task #1: default-0
    how: container
    multihost name: default-0
    arch: x86_64
    distro: Fedora Linux 40 (Container Image)
    summary: 1 guest provisioned
    prepare
    queued push task #1: push to default-0
    push task #1: push to default-0
    queued prepare task #1: requires on default-0
    prepare task #1: requires on default-0
    how: install
    summary: Install required packages
    name: requires
    where: default-0
    package: /usr/bin/flock
    queued pull task #1: pull from default-0
    pull task #1: pull from default-0
    summary: 1 preparation applied
    execute
    queued execute task #1: default-0 on default-0
    execute task #1: default-0 on default-0
    how: tmt
    progress:
    summary: 1 test executed
    report
    how: display
    summary: 1 error
    finish
    container: stopped
    container: removed
    container: network removed
    summary: 0 tasks completed
    total: 1 error
If you're interested in individual test results, then it would be
Yeah, I agree that some of the details could/should be omitted from the default output. I kicked off #2534 where I would like to cover the
From the 2.7 kB of output, these are the relevant lines:

    report
    how: display
    order: 50
    errr /test01
    output.txt: /var/tmp/tmt/run-017/default/plan/execute/data/guest/default-0/test01-1/output.txt
    content:
    ++ mktemp
    + tmp=/tmp/tmp.huNW3sQbKy
    + tmt --help
    ./test.sh: line 4: tmt: command not found
    summary: 1 error

This is the test that is created with

    $ tmt tests create test01
    Test template (shell or beakerlib): shell
    Test directory '/data/s/gitlab-ai-gateway/xxx/test01' created.
    Test metadata '/data/s/gitlab-ai-gateway/xxx/test01/main.fmf' created.
    Test script '/data/s/gitlab-ai-gateway/xxx/test01/test.sh' created.

It could be okay if a file comment described how to fix it, that is "tmt: command not found". Still in the context of

How to view the output of the last command? (I know

I almost feel like creating

Is it possible to custom format
(Not responding to all points)
Now that's a very interesting idea. No, it is not possible to custom format the current tmt output, at least not easily. What you observe is rather logging (goes to

Adding TAP support sounds very much doable. IIUIC, with very limited experience, it's a plain-text interface, a stream of simple lines. Is it something that's consumed through pipes, or is it common to exchange the output via files too? Would you be able to share more about how TAP support would help integrate tmt with your workflows? We could go the easy way, and add a new
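For illustration, TAP really is just a stream of simple lines: a plan line (`1..N`) followed by one `ok` / `not ok` line per test. A minimal sketch of rendering results as TAP version 13 (the `results` pairs are a hypothetical stand-in, not tmt's actual data model):

```python
# Minimal TAP (Test Anything Protocol) emitter sketch.
# `results` is a hypothetical list of (test name, passed) pairs.

def to_tap(results):
    """Render (name, passed) pairs as TAP version 13 lines."""
    lines = ["TAP version 13", f"1..{len(results)}"]
    for number, (name, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {number} - {name}")
    return "\n".join(lines)

results = [("/test01", False), ("/test02", True)]
print(to_tap(results))
# TAP version 13
# 1..2
# not ok 1 - /test01
# ok 2 - /test02
```

Because each test becomes one self-contained line, this kind of output works equally well through a pipe or saved to a file.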
Not specifically TAP, but copy-pasting a failed test run for a bug report is much easier when the results are represented in a concise format rather than in 2 kB of text.

As an example of output customization, the Vale .md validator uses Go templates.
The template receives an array of these objects (https://vale.sh/docs/integrations/guide/#--outputjson):

    {
      "index.md": [
        {
          "Action": {
            "Name": "",
            "Params": null
          },
          "Check": "write-good.Passive",
          "Description": "",
          "Line": 6,
          "Link": "",
          "Message": "'was created' may be passive voice. Use active voice if you can.",
          "Severity": "warning",
          "Span": [
            59,
            69
          ],
          "Match": "was created"
        }
      ]
    }

The array could probably be a JSON Lines generator for streaming processing, or maybe it already is. The only problem with Vale is that if you need to output results to the screen and also save a report, like for CI/CD post-processing, you need to run Vale twice.
I see. Is it something that should be streamed as tests progress, or is it fine to dump this kind of output in the
This seems like a perfect match for a
Unbuffered streaming of test results is necessary for on-screen reports and for troubleshooting problems when the whole test harness crashes. CI/CD is okay with post-processing steps, because reports are uploaded as artifacts for further processing.
@idorax, are you still up for the translation? ;)

Sure, please assign the related task to me, I'm glad to help fix it :-)

Hi @martinhoyer, are the files which will be translated located at the link?

@idorax Awesome! Btw, in case you're not aware, tmt has a Matrix room/channel on chat.fedoraproject.org :) https://matrix.to/#/#tmt:fedoraproject.org

Will do. Thanks!
Would be nice to add `tmt` to the `tldr` project:

Also, what about having a `tmt tldr` subcommand as well?