Doutorado em Ciência da Computação
Level: Doctorate
Browsing Doutorado em Ciência da Computação by subject "Agregação"
Now showing 1 - 1 of 1
- Item: Integrando crowdsourcing e computação humana para tarefas complexas de anotação de vídeos [Integrating crowdsourcing and human computation for complex video annotation tasks] (Universidade Federal do Espírito Santo, 2019-10-02)
  Amorim, Marcello Novaes de; Santos, Celso Alberto Saibel; https://orcid.org/0000-0002-3287-5843; http://lattes.cnpq.br/7614206164174151; http://lattes.cnpq.br/7419525198796496; Ferraz, Carlos Andre Guimaraes; http://lattes.cnpq.br/7716805104151473; Krohling, Renato Antonio; https://orcid.org/0000-0001-8861-4274; http://lattes.cnpq.br/5300435085221378; Goularte, Rudinei; https://orcid.org/0000-0003-1531-1576; http://lattes.cnpq.br/2854771102810220; Villaca, Rodolfo da Silva; https://orcid.org/0000-0002-8051-3978; http://lattes.cnpq.br/3755692723547807

  Abstract: Video annotation is an activity that supplements a video with additional content or information about its context, nature, content, quality and other aspects. These annotations are the basis for building multimedia applications for purposes ranging from entertainment to security. Automatic methods for video annotation exist, but they require specific conditions and features that are not always found in real application scenarios. Manual annotation uses the intelligence and workforce of people in the annotation process and is an alternative for cases where automatic methods cannot be applied. However, manual video annotation can be costly: as the content to be annotated grows, so does the annotation workload. Crowdsourcing is a viable strategy in this context because it outsources the work to a multitude of workers, who perform specific parts of it in a distributed way. However, as the complexity of the required annotations increases, it becomes necessary to employ workers who are skilled, or at least willing, to perform larger, more complicated, and more time-consuming tasks. This makes crowdsourcing challenging to use, as experts demand higher pay and recruiting them tends to be difficult. To overcome this problem, strategies have emerged that decompose the main problem into a set of simpler subtasks suitable for crowdsourcing. These smaller tasks are organized in a workflow so that the execution process can be formalized and controlled. The literature offers several proposals for building this type of workflow, but each presents limitations that still have to be overcome. This thesis presents a new framework for using crowdsourcing in applications that require complex video annotation tasks. The framework covers the whole process, from the definition of the problem and the decomposition of tasks to the construction, execution, and management of the workflow. This framework, called CrowdWaterfall, incorporates the strengths of current proposals and adds new concepts, techniques, and resources to overcome some of their limitations.
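  The abstract describes decomposing a complex annotation job into simpler micro-tasks organized in a workflow whose execution can be formalized and controlled. The sketch below only illustrates that general idea; it is not the CrowdWaterfall framework itself, and the MicroTask/Stage classes, the stage names, and the ask_crowd callback are all hypothetical stand-ins for a real crowdsourcing platform.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical micro-task: one small unit of work that a single
# non-expert crowd worker can complete on its own.
@dataclass
class MicroTask:
    stage: str
    payload: Dict
    answers: List = field(default_factory=list)

# Hypothetical workflow stage: splits its input into micro-task payloads,
# then aggregates the workers' answers before the next stage runs.
@dataclass
class Stage:
    name: str
    decompose: Callable[[Dict], List[Dict]]  # input -> list of micro-task payloads
    aggregate: Callable[[List], Dict]         # list of answer lists -> next input

def run_workflow(video: Dict, stages: List[Stage],
                 ask_crowd: Callable[[MicroTask], object]) -> Dict:
    """Run the stages in sequence, sending each micro-task to the crowd."""
    current = video
    for stage in stages:
        tasks = [MicroTask(stage.name, p) for p in stage.decompose(current)]
        for task in tasks:
            task.answers.append(ask_crowd(task))  # one worker answer per task here
        current = stage.aggregate([t.answers for t in tasks])
    return current

if __name__ == "__main__":
    # Toy pipeline: split a video into 30-second clips, then label each clip.
    stages = [
        Stage("segment",
              decompose=lambda v: [{"clip": (s, s + 30)}
                                   for s in range(0, v["duration"], 30)],
              aggregate=lambda ans: {"clips": [a[0] for a in ans]}),
        Stage("label",
              decompose=lambda v: [{"clip": c} for c in v["clips"]],
              aggregate=lambda ans: {"labels": [a[0] for a in ans]}),
    ]
    # Stand-in for a call to a real crowdsourcing platform.
    fake_crowd = lambda task: f"answer for {task.payload}"
    print(run_workflow({"duration": 90}, stages, fake_crowd))
```

  In this toy setup each stage only ever hands workers a small payload, which mirrors the abstract's point: complexity lives in the workflow's decomposition and aggregation steps, not in any single task given to the crowd.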