uniapp-text-to-speech
Get your MiniMax API key and GroupId from: https://platform.minimaxi.com/user-center/basic-information/interface-key
npm install uniapp-text-to-speech
import SpeechSynthesisUtil from 'uniapp-text-to-speech';

// Initialize
const tts = new SpeechSynthesisUtil({
  API_KEY: 'your_minimax_api_key', // MiniMax API key
  GroupId: 'your_group_id', // MiniMax group ID
  MAX_QUEUE_LENGTH: 3, // Optional: maximum audio queue length
  modelConfig: { // Optional: audio generation settings
    model: 'speech-01-240228',
    voice_setting: {
      voice_id: 'female-tianmei',
      speed: 1,
      vol: 1,
    },
  },
  // Other options...
});

// Basic playback
try {
  await tts.textToSpeech('你好,世界!');
} catch (error) {
  console.error('Speech synthesis failed:', error);
}
import SpeechSynthesisUtil from "uniapp-text-to-speech";

// Initialize
const tts = new SpeechSynthesisUtil({
  API_KEY: "your_minimax_api_key", // MiniMax API key
  GroupId: "your_group_id", // MiniMax group ID
});

const mockTexts = ['你好,', '我是', '人工智能助手,', '很高兴认识你!'];

try {
  for (const text of mockTexts) {
    await tts.processText(text);
  }
  await tts.flushRemainingText();
} catch (error) {
  console.error(`Segmented playback failed: ${error.message}`);
}
import { EventType } from "uniapp-text-to-speech";

// Listen for synthesis start
tts.on(EventType.SYNTHESIS_START, ({ text }) => {
  console.log(`Started synthesizing: ${text}`);
});

// Listen for playback start
tts.on(EventType.AUDIO_PLAY, ({ currentText }) => {
  console.log(`Now playing: ${currentText}`);
  status.value = "Playing";
});

// Listen for playback end
tts.on(EventType.AUDIO_END, ({ finishedText }) => {
  console.log(`Finished playing: ${finishedText}`);
  status.value = "Ready";
  progress.value = 100;
});

// Listen for errors
tts.on(EventType.ERROR, ({ error }) => {
  console.log(`Error: ${error.message}`);
  status.value = "Error";
});

// Listen for pause
tts.on(EventType.PAUSE, () => {
  console.log("Playback paused");
  status.value = "Paused";
  isPaused.value = true;
});

// Listen for resume
tts.on(EventType.RESUME, () => {
  console.log("Playback resumed");
  status.value = "Playing";
  isPaused.value = false;
});
// Pause playback
tts.pause();
// Resume playback
tts.resume();
// Toggle between play and pause
tts.togglePlay();

// Automatically split long text into segments at punctuation marks
await tts.processText("这是第一句话。这是第二句话!这是第三句话?");
// Force processing of any remaining unplayed text
await tts.flushRemainingText();
// Reset the text processor
tts.resetTextProcessor();
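The punctuation-based segmentation above can be sketched roughly as follows. This is only an illustration of the idea (buffer incoming text, emit complete sentences at punctuation boundaries, flush the remainder on demand), not the library's actual implementation:

```javascript
// Illustrative sketch of punctuation-based text segmentation
// (not the library's internal code).
class TextSegmenter {
  constructor() {
    this.buffer = '';
  }

  // Append incoming text; return any complete sentences
  // (terminated by 。!? or .!?) accumulated so far.
  process(text) {
    this.buffer += text;
    const segments = [];
    const re = /[^。!?.!?]*[。!?.!?]/g;
    let match;
    let consumed = 0;
    while ((match = re.exec(this.buffer)) !== null) {
      segments.push(match[0]);
      consumed = re.lastIndex;
    }
    this.buffer = this.buffer.slice(consumed);
    return segments;
  }

  // Return whatever is left in the buffer (like flushRemainingText).
  flush() {
    const rest = this.buffer;
    this.buffer = '';
    return rest ? [rest] : [];
  }
}

const seg = new TextSegmenter();
console.log(seg.process('这是第一句话。这是第二')); // ['这是第一句话。']
console.log(seg.process('句话!还有'));             // ['这是第二句话!']
console.log(seg.flush());                           // ['还有']
```

Buffering until a sentence boundary keeps each synthesis request a natural-sounding unit instead of an arbitrary text fragment.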
// Get the current state
const state = tts.getState();
console.log("Playing:", state.isPlaying);
console.log("Paused:", state.isPaused);
// Reset all state
tts.reset();
Parameter | Type | Required | Description |
---|---|---|---|
API_KEY | string | Yes | MiniMax API key |
GroupId | string | Yes | MiniMax group ID |
MAX_QUEUE_LENGTH | number | No | Maximum audio queue length; defaults to 3 |
modelConfig | object | No | Speech synthesis settings; see the MiniMax docs |
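The role of a `MAX_QUEUE_LENGTH`-style cap can be sketched with a bounded queue: producers of synthesized audio wait whenever the queue is full, so synthesis never runs far ahead of playback. This is an illustrative sketch of that pattern, not the package's internal code:

```javascript
// Illustrative bounded FIFO queue (not the package's internal code).
// enqueue() resolves only when there is room, mimicking how a
// MAX_QUEUE_LENGTH-style cap keeps synthesis from outrunning playback.
class BoundedQueue {
  constructor(maxLength = 3) {
    this.maxLength = maxLength;
    this.items = [];
    this.waiters = [];
  }

  async enqueue(item) {
    while (this.items.length >= this.maxLength) {
      // Block until a dequeue frees a slot.
      await new Promise((resolve) => this.waiters.push(resolve));
    }
    this.items.push(item);
  }

  dequeue() {
    const item = this.items.shift();
    const waiter = this.waiters.shift();
    if (waiter) waiter(); // wake one blocked producer
    return item;
  }
}

(async () => {
  const q = new BoundedQueue(2);
  await q.enqueue('clip-1');
  await q.enqueue('clip-2');
  // A third enqueue now waits until dequeue() frees a slot.
  setTimeout(() => q.dequeue(), 10);
  await q.enqueue('clip-3'); // resolves after the timeout dequeues
  console.log(q.items); // ['clip-2', 'clip-3']
})();
```

A small cap bounds memory spent on pre-synthesized audio while still hiding network latency between segments.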
Event | Description | Callback payload |
---|---|---|
SYNTHESIS_START | Synthesis started | { text: string } |
SYNTHESIS_END | Synthesis finished | { text: string } |
AUDIO_PLAY | A single audio segment started playing | { text: string } |
AUDIO_END | All audio segments finished playing | { text: string } |
PAUSE | Playback paused | - |
RESUME | Playback resumed | - |
ERROR | An error occurred | { error: Error } |
AUDIO_PLAY: fired each time an audio segment starts playing.
AUDIO_END: fired only once, after all audio segments have finished playing.

import SpeechSynthesisUtil, { EventType } from "uniapp-text-to-speech";
const tts = new SpeechSynthesisUtil({
  API_KEY: "your_minimax_api_key",
  GroupId: "your_group_id",
  modelConfig: {
    model: "speech-01-240228",
    voice_setting: {
      voice_id: "female-yujie", // Use the "yujie" voice by default
      speed: 1.2,
      vol: 1,
    },
  },
});

// Listen for the playback-finished event
tts.on(EventType.AUDIO_END, ({ text }) => {
  console.log("All audio finished playing; last text played:", text);
});

// Segmented playback example
async function playMultipleTexts() {
  await tts.processText("第一段文本");
  await tts.processText("第二段文本");
  await tts.flushRemainingText(); // Make sure all text is processed
}

// Reset playback state
tts.reset();
The AUDIO_END event fires only once, after all audio segments have finished playing. The reset() method resets all playback state and counters.

Method | Description | Parameters | Returns |
---|---|---|---|
textToSpeech | Convert text to speech | text: string | Promise |
processText | Process long text | text: string | Promise |
pause | Pause playback | - | void |
resume | Resume playback | - | void |
togglePlay | Toggle play/pause | - | void |
reset | Reset all state | - | void |
on | Add an event listener | event: EventType, callback: Function | void |
off | Remove an event listener | event: EventType, callback: Function | void |
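The on/off pair follows the usual event-emitter convention: off removes exactly the callback reference that was passed to on, so anonymous functions cannot be unsubscribed. A minimal sketch of those semantics (illustrative, not the package's implementation):

```javascript
// Minimal event-emitter sketch showing on/off semantics
// (illustrative; not the package's implementation).
class Emitter {
  constructor() {
    this.listeners = new Map();
  }

  on(event, callback) {
    if (!this.listeners.has(event)) this.listeners.set(event, []);
    this.listeners.get(event).push(callback);
  }

  off(event, callback) {
    const list = this.listeners.get(event) || [];
    // Removal works by reference, so keep a handle to the callback.
    this.listeners.set(event, list.filter((cb) => cb !== callback));
  }

  emit(event, payload) {
    for (const cb of this.listeners.get(event) || []) cb(payload);
  }
}

const emitter = new Emitter();
const log = [];
const onPlay = ({ text }) => log.push(`playing: ${text}`);
emitter.on('AUDIO_PLAY', onPlay);
emitter.emit('AUDIO_PLAY', { text: '你好' }); // listener fires
emitter.off('AUDIO_PLAY', onPlay);
emitter.emit('AUDIO_PLAY', { text: '世界' }); // no listener left
console.log(log); // ['playing: 你好']
```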
<template>
  <div class="speech-demo">
    <!-- Basic demo -->
    <section class="demo-section">
      <h3>Basic demo</h3>
      <textarea v-model="basicText" placeholder="Enter the text to convert"></textarea>
      <button @click="handleBasicSpeech">Play</button>
    </section>
    <!-- Segmented playback demo -->
    <section class="demo-section">
      <h3>Segmented playback demo</h3>
      <div class="segment-container">
        <div v-for="(text, index) in mockTexts" :key="index" class="segment">
          <span>{{ text }}</span>
        </div>
      </div>
      <button @click="handleSegmentSpeech">Play in segments</button>
    </section>
    <!-- Advanced features demo -->
    <section class="demo-section">
      <h3>Advanced features demo</h3>
      <div class="controls">
        <button @click="handleTogglePlay">{{ isPaused ? 'Resume' : 'Pause' }}</button>
        <button @click="handleReset">Reset</button>
      </div>
      <div class="status">
        <p>Status: {{ status }}</p>
        <p>Progress: {{ progress }}%</p>
      </div>
    </section>
    <!-- Event log -->
    <section class="demo-section">
      <h3>Event log</h3>
      <div class="log-container">
        <div v-for="(log, index) in eventLogs" :key="index" class="log-item">
          {{ log }}
        </div>
      </div>
    </section>
  </div>
</template>
<script setup lang="ts">
import { ref, onMounted, onBeforeUnmount } from 'vue';
import SpeechSynthesisUtil, { EventType } from 'uniapp-text-to-speech';

// Reactive state
const basicText = ref('你好,这是一个基础示例。');
const mockTexts = ref(['你好,', '我是', '人工智能助手,', '很高兴认识你!']);
const status = ref('Ready');
const progress = ref(0);
const isPaused = ref(false);
const eventLogs = ref<string[]>([]);

// Initialize the speech utility
const tts = new SpeechSynthesisUtil({
  API_KEY: 'your_minimax_api_key', // MiniMax API key
  GroupId: 'your_group_id', // MiniMax group ID
  modelConfig: {
    model: 'speech-01-240228',
    voice_setting: {
      voice_id: 'female-yujie',
      speed: 1,
      vol: 1
    }
  }
});

// Append a log entry, keeping only the 10 most recent
const addLog = (message: string) => {
  eventLogs.value.unshift(`${new Date().toLocaleTimeString()}: ${message}`);
  if (eventLogs.value.length > 10) {
    eventLogs.value.pop();
  }
};

// Set up event listeners
const setupEventListeners = () => {
  // Synthesis started
  tts.on(EventType.SYNTHESIS_START, ({ text }) => {
    addLog(`Started synthesizing: ${text}`);
  });
  // Playback started
  tts.on(EventType.AUDIO_PLAY, ({ currentText }) => {
    addLog(`Now playing: ${currentText}`);
    status.value = 'Playing';
  });
  // Playback finished
  tts.on(EventType.AUDIO_END, ({ finishedText }) => {
    addLog(`Finished playing: ${finishedText}`);
    status.value = 'Ready';
    progress.value = 100;
  });
  // Errors
  tts.on(EventType.ERROR, ({ error }) => {
    addLog(`Error: ${error.message}`);
    status.value = 'Error';
  });
  // Paused
  tts.on(EventType.PAUSE, () => {
    addLog('Playback paused');
    status.value = 'Paused';
    isPaused.value = true;
  });
  // Resumed
  tts.on(EventType.RESUME, () => {
    addLog('Playback resumed');
    status.value = 'Playing';
    isPaused.value = false;
  });
};

// Basic playback
const handleBasicSpeech = async () => {
  try {
    await tts.textToSpeech(basicText.value);
  } catch (error) {
    addLog(`Playback failed: ${(error as Error).message}`);
  }
};

// Segmented playback
const handleSegmentSpeech = async () => {
  try {
    for (const text of mockTexts.value) {
      await tts.processText(text);
    }
    await tts.flushRemainingText();
  } catch (error) {
    addLog(`Segmented playback failed: ${(error as Error).message}`);
  }
};

// Toggle play/pause
const handleTogglePlay = () => {
  tts.togglePlay();
};

// Reset playback
const handleReset = () => {
  tts.reset();
  status.value = 'Ready';
  progress.value = 0;
  isPaused.value = false;
  addLog('All state reset');
};

// Lifecycle hooks
onMounted(() => {
  setupEventListeners();
});
onBeforeUnmount(() => {
  tts.reset();
});
</script>
<style scoped>
.speech-demo {
padding: 20px;
max-width: 800px;
margin: 0 auto;
}
.demo-section {
margin-bottom: 30px;
padding: 20px;
border: 1px solid #eee;
border-radius: 8px;
}
h3 {
margin-top: 0;
margin-bottom: 15px;
color: #333;
}
textarea {
width: 100%;
height: 100px;
padding: 10px;
margin-bottom: 10px;
border: 1px solid #ddd;
border-radius: 4px;
resize: vertical;
}
button {
padding: 8px 16px;
margin-right: 10px;
border: none;
border-radius: 4px;
background-color: #4CAF50;
color: white;
cursor: pointer;
}
button:disabled {
background-color: #cccccc;
}
.segment-container {
margin-bottom: 15px;
}
.segment {
display: inline-block;
padding: 5px 10px;
margin: 5px;
background-color: #f5f5f5;
border-radius: 4px;
}
.controls {
margin-bottom: 15px;
}
.status {
padding: 10px;
background-color: #f9f9f9;
border-radius: 4px;
}
.log-container {
height: 200px;
overflow-y: auto;
padding: 10px;
background-color: #f5f5f5;
border-radius: 4px;
}
.log-item {
padding: 5px;
border-bottom: 1px solid #eee;
font-family: monospace;
}
</style>
MIT