We are facing an issue where, when the AI generates a long response, it is unable to speak the full answer. Sometimes it skips parts, and in some cases, no speech is generated at all — only the text response appears.
This issue is occurring in both voice mode and when using the avatar.
Could you please help us identify the cause and advise on how this can be resolved?
This is urgent, dear.
Hello,
Email me some samples of the text. It should be split into small chunks, but I need to check it.
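For illustration, the kind of chunking I mean looks like this. This is only a minimal sketch (the function name and chunk size are made up, not the product's actual code): long replies are split at sentence boundaries so each piece stays short enough for the TTS engine.

```python
import re

def split_into_chunks(text: str, max_len: int = 200) -> list[str]:
    """Split text at sentence boundaries into chunks no longer than max_len.

    A single sentence longer than max_len becomes its own oversized chunk;
    real code would need a fallback split for that case.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if not sentence:
            continue
        # Start a new chunk when adding this sentence would exceed the limit.
        if current and len(current) + 1 + len(sentence) > max_len:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

If a part of the response is skipped, it usually means one of these chunks never reached the speech engine, which is why I need to see the actual text.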
Also please use the support form for support queries.
Regards
Hi, I want to use the OpenAI GPT-5.2 model, but I am facing an issue after changing it. Could you please guide me on how to switch to it properly? Apart from changing it in the Settings, is there anything else that needs to be updated or configured?
Hello,
I have already replied. No, you only need to change it in the Settings.
Regards
Hello Support Team,
I need your assistance with two issues related to the LiveSmart AI system.
1) Urdu Text-to-Speech Pronunciation Issue
I have selected Urdu as the language; however, when the AI performs text-to-speech, the pronunciation does not sound native and appears to have an English accent.
I tested the same Urdu content using ElevenLabs, where the pronunciation, voice quality, and overall output are accurate and natural.
Could you please guide me on the following:
How can I enable or configure a proper native Urdu voice in LiveSmart AI?
Is it possible to fully integrate or configure ElevenLabs for improved Urdu pronunciation and voice quality?
2) HR Training Document – Strictly Document-Based Responses
I have an HR training document and want the AI to generate responses only from this document. If an answer is not available in the document, the AI should not generate any external or general response.
I would like to confirm:
If I paste the complete document into Room Settings → Chat Settings, will the AI reliably use it as the sole knowledge source?
Or is there a recommended configuration or method to enforce strict document-based responses?
I would appreciate your guidance on the correct configuration for both scenarios.
Thank you for your support.
Hello,
1. Hmm, I am not sure about this. You can send an email to support@heygen.com about your problem: that you are not happy with the Urdu voice provided by HeyGen, and ask how to add a custom one or choose another from ElevenLabs. Then, once you generate your own voice or get one from HeyGen, add the custom voice ID in the corresponding field in your LiveSmart dashboard.
2. You can do this. Go to your OpenAI dashboard and create an assistant there. Upload your documents and state your instructions in the system field: not to answer unrelated questions, to stick strictly to the documents, etc. Then go to your LiveSmart dashboard -> Advanced, where you will see drop-downs with the available assistants; choose the one you trained. This will do the trick.
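As an illustration, the system instructions could look something like this (the wording is only a suggestion, not an official template; adapt it to your document):

```text
You are an HR training assistant. Answer ONLY using the attached HR
training document. If the answer is not found in the document, reply:
"I can only answer questions covered by the HR training document."
Do not use general knowledge, do not guess, and do not answer
questions unrelated to HR training.
```

Strictness still depends on the model following the instructions, so test it with a few out-of-scope questions before rolling it out.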
Regards
Thank you for your response.
I am currently using the D-ID avatar and have the API key, but I am unable to find the option to add a custom voice. I would appreciate your guidance on this.
Hi,
It is not possible to add a custom voice for D-ID. I am still checking, though.
Regards
Thank you for the update. Could you please suggest a solution? This is needed for hospital staff training.
I will check within a reasonable timeframe. I am also exploring additional options, e.g. using ElevenLabs voices and an additional avatar.
Can I ask you to check the audio-only avatars and choose an ElevenLabs voice?
Can we use an ElevenLabs voice with the project? If not, please suggest an alternative, as I need to finalize the project.
Did you send a request to HeyGen about the Urdu voice not being native? What did they respond? Can we move this discussion to email? Please use the contact/support form.
If this voice can be integrated with D-ID, it would be ideal, as we would like to use both audio and an AI avatar within D-ID.
I have just sent you an email back.
“Hello, I want to train the AI with Medical and HR-related data. Do I need to go to the “Room Settings” and add the data in the available fields there?”
Hello,
Please get familiar with how to train the avatar from this FAQ page.
In short – yes, you can add the prompt in AI Avatars -> Room Settings -> Chat Settings. This is the quick way. You can also train an assistant in the OpenAI dashboard; you can upload multiple documents and add much more info there, then assign this assistant to the avatar. The last way, which is much more complex, is to add custom functions, like the weather forecast or the YouTube playlist examples. Please check the documentation.
Email me if you have more questions or concerns.
Regards
Thanks
Hi. I’m from Kazakhstan. I tested the system as a user and set the language to Russian. The system says “Country, region, or territory not supported.”
How can I resolve this issue?
Hi, please email me a screenshot of where it says this. It should support Russian voices and locales.
I sent you screenshots by email.
I sent you the test and admin account details by email.
I have emailed you back.
Hello! I need additional help and advice. I have sent you the information by email. Thank you for your prompt reply.
Hello,
I have just replied.
Regards
Hello. I have a suggestion to improve the AI avatar. I sent you an email, please read it.
Hello,
Sure, I will check and come back to you!
Thank you
I sent an email earlier to support and haven’t received a reply yet. Could you please update me when possible?
I sent you an email back a couple of hours ago. Can you please check it? By the way, that email address is not a support one.
Hello. I sent you an email, please reply.
Hello,
Sorry for the late reply, I have sent back an email.
Regards
Hello Team,
I have purchased both the application and all required API plans, and the APIs are configured from our side.
Currently:
Audio AI chatbot is working fine
AI Video Avatar is not working
Room ID is not being created
Additionally, we plan to use the AI Video Avatar for HR training, and the D-ID (D10) API has already been purchased and integrated. We may be missing some configuration steps.
We would appreciate your guidance, as we need to present a demo to management.
Please advise
Hello,
I have replied to your email, check it please.
Regards
I have already purchased the D-ID AI Avatar API.
Do I still need to purchase the HeyGen API and provide its token? Please clarify if D-ID is supported on your platform or if HeyGen is mandatory for AI avatar integration.
I have also submitted other requests to support besides this one.
Hello,
No, you can go only with D-ID, HeyGen is not required. I have replied to your email, please check it.
Regards
Thanks, dear.
Do you plan to implement a memory to save the chat conversation?
Hello,
Actually, chat history is enabled and you can see it in your dashboard -> Logs -> Chat History.
Regards
Let me elaborate. I tested this scenario in the chat: I told the avatar I am 42 years old, then asked "What's my age?". She said she doesn't have such information. There is no memory implemented.
Hello,
I am updating the PHP core so it can handle newer models with context. The release will be available in a couple of days. Meanwhile, you can check with gpt-3.5-turbo, for example, to keep the context.
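The idea behind this kind of memory can be sketched as follows. This is an illustrative Python sketch, not the actual PHP implementation: prior turns are kept in a list and resent with every request, so the model "remembers" earlier facts; the cap of 10 turns is an assumption.

```python
MAX_TURNS = 10  # cap the history so the prompt does not grow forever

def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Append one message and keep only the most recent MAX_TURNS entries."""
    history.append({"role": role, "content": content})
    return history[-MAX_TURNS:]

def build_messages(system_prompt: str, history: list[dict]) -> list[dict]:
    """Build the messages payload for a chat-completion style API call.

    The whole history is sent each time; that is what gives the model
    context, at the cost of a longer (more expensive) prompt.
    """
    return [{"role": "system", "content": system_prompt}] + history
```

With this shape, after the user says "I am 42 years old", that message is still in the payload when they later ask "What is my age?", so the model can answer from context.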
Best
It is updated to the latest models; you can test it here. After some tests, I will prepare the release.
Thank you
Thank you for mentioning it. Can you please check again? I missed one thing, but it should be OK now.
Working fine now. You’re so professional. Can you push this update?
Just consider these improvements in the future updates:
- There is a noticeable delay between the question and the response.
- When switching between languages during live chat, there are some errors, like the mic toggling randomly and the conversation being unstable.
- Is there any way to change the Arabic voice? The tone is very robotic, and I am not sure which tool it is retrieved from.
Hi,
- It depends on the server speed; actually, my demo server is not one of the best. Is there a delay on your end? Where is the delay: between your question and the subtitles, or between the subtitles and the avatar speaking?
- will check this.
- Log in to the dashboard and choose an Arabic voice; there are quite a few. Make sure the “Language you speak” option is also set to Arabic, so the avatar understands your speech better.
Regards
- The delay between my question and the response.
- I tried changing the language from the dashboard, creating an avatar with the Arabic language, and changing it from the live session; all the voices are robotic. I am a native Arabic speaker. I guess the voice is retrieved from HeyGen, not ChatGPT, as ChatGPT’s Arabic is better.
Yes, these are voices provided by HeyGen and ElevenLabs. I counted 27 Arabic voices there; do all of them sound robotic (unnatural)?
Yeah, it’s very noticeable to any native Arabic speaker.
I see. You can contact HeyGen support at support@heygen.com and ask them how they can provide a custom Arabic voice. Then it should appear in your voice list. Another option is to file a question here.
Thanks, I will check that.
When will you push the next release with the latest updates?
I am implementing a new feature – analyzing image input. You can check it too: you can ask the avatar something like “What is this image?” and paste a link to the image.
After the tests are done, I will release it, most probably early in the coming week.
Best
Awesome
Hello, can you confirm:
1: Can LiveSmart AI Video – Smart Video Avatars with ChatGPT do recording/automatic recording as a session starts?
2: When is the recording uploaded to the server? At the end of the session with the AI?
3: Is the recording uploaded to the server automatically?
Hello,
Thank you for your interest in LiveSmart!
1. Yes, it can do recordings, but not automatically; you need to start the recording manually when the meeting starts. There is an option to enable recording: go to Rooms -> AI Avatars -> Room Settings -> Enable recording. I have created a room for you to test it – check this link. Keep in mind that conference mode is enabled and the camera is enabled too. If you try the recording, you can disable your camera from the icon. You can also create your own room to test from the dashboard with the demo/demo credentials.
2. It is uploaded to the server in chunks at runtime, so it can handle large files.
3. Yes. If you have an admin account or admin tenant, you will be able to see the recorded videos in the Recordings section of the dashboard.
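The chunked upload in point 2 can be sketched like this. It is a minimal illustration, not the product's actual upload code (the 1 MiB chunk size is an assumption): the recording stream is read in fixed-size pieces, and each piece would be POSTed to the server as soon as it is available, so a long recording never has to fit in memory at once.

```python
import io

CHUNK_SIZE = 1024 * 1024  # 1 MiB per piece; the real size may differ

def iter_chunks(stream, chunk_size: int = CHUNK_SIZE):
    """Yield fixed-size pieces of a binary stream until it is exhausted.

    Each yielded piece would be sent to the server immediately, keeping
    memory use constant regardless of the recording's total length.
    """
    while True:
        piece = stream.read(chunk_size)
        if not piece:
            break
        yield piece
```

The last piece is simply shorter than the chunk size, and an empty read signals the end of the stream.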
Email me if you have more questions or other doubts.
Regards
Thanks. I can see from the image above that the saved video is not .mp4. Is there a way to convert it to .mp4 on the server?
This is a good script; however, you need to make the avatar setup simpler. For example, after selecting an avatar, you have to scroll a long way to the bottom to continue with the process.
Secondly, which has more features, this one or your other live video conferencing script? That is, is the other script integrated with this one as an add-on?
Hello,
Thank you for your interest and feedback. Can you please share how you envision a simplified process? Adding a Save/Save and Run button at the top? A wizard instead of tabs? Or something else?
About the other conferencing tool, no, this product is not included in the other, they are completely different products.
Email me if you have more questions or concerns.
Regards
Those avatar settings, options, and process steps could be a sticky sidebar or something similar. I also noticed that the models used have a big delay in communicating and in understanding the language. I compared this with another script here that isn’t similar to yours but has real-time voice features and is much smoother, with no interpretation errors. It’s Mgicai; you can try the real-time voice chat and see how it responds. If this can have a similar response time, it’ll be a great thing, since it’s purely focused on avatars.
Hi! Thank you for your suggestion – I’ll certainly consider ways to simplify the process.
Could you please clarify what you mean by “understanding the language”? If you are asking about using a language other than English, you may need to set the appropriate voice for that language. You can do this by going to Rooms → AI Avatars and selecting the language you’d like the avatar to speak as well as your own language.
Regarding response time, from my testing, the delay between when you speak and when the avatar responds is less than two seconds, so the lag appears quite minimal. If you were referring to another issue or if there’s something else you’re noticing, please let me know – I’ll be happy to help!
Let me know if you have more questions or feedback.
Regards
Hi,
before buying I want to confirm
Can I train or customize the AI avatar for specific roles (like AI receptionist, medical trainer, or HR trainer)?
Can I upload my own training material so the AI learns and replies from that?
Does it support adding a custom knowledge base or my fine-tuned OpenAI model?
During a live video conference, will the AI avatar respond to participants in real time?
Thanks!
Hello,
Thank you for your interest in LiveSmart!
- Yes, you can train an avatar. Keep in mind that the main LLM runs over OpenAI and you can use its great capabilities; for example, you can add an assistant in your OpenAI dashboard, attach multiple documents there, and assign the avatar to this assistant. For more details about the training procedure, please check this article.
- Yes, you can upload your training material to an assistant in your OpenAI dashboard, and then assign this assistant to the avatar.
- Yes, you can use your fine-tuned OpenAI model. Models can be updated from the Settings section.
- Just to clarify: this software is 1:1 only. You can enable conference mode, but only your camera is enabled: https://www.new-dev.com/videoai/conference. If you want real conference software with an AI video agent, you can check my other product, LiveSmart Video Server: a standalone video conferencing and streaming tool with an enormous set of features, including a video AI avatar, chat with ChatGPT, and so on. Here is a room where you can do a conference with an avatar. Open this meeting and join on another incognito tab or another device as a regular attendee.
Please email me if you have more questions or if you have any other concerns.
Regards
Hi, I sent a message to this contact@livesmart.video, please check
Hello,
Thank you for your interest! I have just replied.
Regards
What languages can avatars speak? I tried speaking Russian, Ukrainian, and Polish, but this girl doesn’t understand anything.
Hello,
Yes, the avatar understands any language, but you need to set it. When you create the room, you can change the language you speak from the drop-down named “Language you speak”. You can also change the language at runtime in a meeting from the A-Z button.
Please email me if you have more questions or any other concerns.
Regards
Check the button just to the right of the mic. Choose a language from there; if it is set to multilanguage, you may speak and the avatar will understand you. A better approach is to log in to the dashboard, create a new room with a new avatar, and choose your spoken language from the drop-down.
You can contact me from the form here https://codecanyon.net/user/nicky75
Regards
Thanks for the reply. Unfortunately, the first option doesn’t include country or language information, only names and professions. The second option doesn’t have a login or password: https://www.new-dev.com/videoai/dash/loginform.php And frankly, after all these unsuccessful attempts to figure out how it works, I can’t believe it’s what I thought it was. It’s all so confusing, and it’s unclear how it is supposed to work. I think if I buy this script, people visiting the site will be just as confused as I was, not knowing where they are or what to do. Best wishes.
Sorry, I think I added the credentials – demo/demo. The flow is actually straightforward and easy.
I created an avatar that speaks Russian – https://www.new-dev.com/videoai/nom03wstpj
This is an avatar from D-ID. Here is a Russian avatar from HeyGen – https://www.new-dev.com/videoai/eq1n9i44dx
In the dashboard you can choose between 100+ avatars from different locales, speech rates, emotions and many other options. Check this article for more information.
Regards
You were right: the video panel showed only names, due to the information from HeyGen. I changed the code a bit, and now you can change the voice and locale at runtime. Please check there; you can change the locale to Russian, Ukrainian, or Polish.
Regards
Thanks, brother. Could you please tell me if the script itself can be in multiple languages? For example, English and Russian? https://prnt.sc/09P90VWzcqzW Where do I choose the language?
That is for the integration of unmoderated chat with the Dolphin AI model. I will check this a bit later.
Do you want to translate the panel, or do you need the avatar to be multilingual? Please contact me from the form.
Okay, thanks, if what I’m writing is bothering you, delete it. There’s a flag there.
Of course not, but we can discuss this better over email. I am going away and will not be around for a couple of hours.
Hello, when are you going to release the update with simli.com?
Hello,
Simli integration will not be included in the coming update due to some challenges with implementing the API in the current code.
The coming release will introduce support for AI models other than OpenAI.
Regards
Will this take audio input, or only text via chat?
Hello,
Yes, you can use voice and the avatar will understand you. Please keep in mind that you can change the locale from the in-meeting icon, or create a new avatar with a different native language and locale from the dashboard. Email me if you have more questions or concerns.
Regards
Thanks for the reply. How can I test the voice conversation?
You can check the main demo from here. If you want to generate another avatar, you can log in to the dashboard with demo/demo, go to Rooms -> AI Avatars, and choose an avatar, voice, background, change the attitude, etc. For more information on how to use the product, you can check this article. Email me if you have more questions or concerns.
Regards
I always get the same error message when accessing the room (my ChatGPT API still has $20 in credits, and 10 HeyGen credits): “Error with avatar interaction”.
Contact me from the support form and please provide the URL of the meeting. You can also check for more details about the error in your dashboard -> Logs -> Session logs.
When I first bought this script a year ago, I did not realize how high the pricing was, so I deleted the project. One year later, I want to try again. I want to build AI math helper bots (algebra, geometry, etc.). I successfully installed the script, but I’m getting an error that I have an invalid purchase code, even though there are no active websites utilizing this code. Can you please help me with this?
Hi,
I have cleaned up the bind to your site, you can activate the new instance.
PS Thank you for the great review.
Regards
Thank you so kindly for the quickness! May I ask: am I able to implement that idea with this script? Would customers be able to pay on the site to use the AI?
Sorry for the typo. I meant: would I be able to allow customers to pay to utilize the chatbots I make?
Hello,
Yes, you can. Please contact me from the contact form, so I can help you on how to do this.
Regards
perfect, i messaged you thank you
Thank you, I have emailed back.
I checked that the HeyGen API costs 99 USD for 100-500 minutes of live video. I think this cost is higher than paying for a real person’s work.
Hi,
From here, you can check that 1 credit is 5 minutes of speaking time, not session time. And yes, AI is expensive.
But you have an alternative: using the avatar with ElevenLabs, an audio-only avatar that is much cheaper. You can even use it for free, using the browser’s voice synthesis.
Just to mention that I am working on integration with another video avatar API – simli.com as an alternative to HeyGen.
Email me if you have other questions or doubts.
Regards