In the context of 5G and Network Cloudification, latency is strongly influenced by both the data transmission delay and the time required to perform a given computation, the so-called response time. The talk proposes the adoption of the Data Stream Processing (DaSP) computational model to perform on-the-fly, real-time analysis of data received from the network in the form of streams. Parallelism is exploited to meet bandwidth and latency requirements and to reduce the response time. Smart City applications, and Complex Event Processing applications more generally, could greatly benefit from the adoption of DaSP. Computations could be executed entirely at the network edge, exploiting multicore general-purpose devices. The gain is twofold: the data transmission delay is minimized, since the computation is performed locally (directly at the edge), and the response time is improved, thanks to the DaSP acceleration. Moreover, caching can be exploited for applications that cannot be executed entirely at the edge and therefore require offloading to the cloud. The discussion also covers aspects related to the control plane: the orchestration and migration of virtualized resources (computing and storage) to implement network slicing and handovers for moving User Equipment (UE). Each service should be implemented following the cloud-native approach and the microservices model; this choice improves flexibility and scalability by exploiting containers and orchestrators (e.g., Kubernetes). As for the automation aspects and the prediction of the best virtualized-resource deployment for handover-related problems, federated learning has been proposed as a viable distributed solution.
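To make the DaSP idea concrete, the following is a minimal sketch of a keyed, data-parallel stream operator: tuples are hash-partitioned by key across operator replicas so that each replica maintains its own state and can run independently, which is the basic mechanism by which parallelism reduces the response time. All names here (`Worker`, `process_stream`, the running-average computation) are illustrative assumptions, not part of any specific DaSP framework discussed in the talk.

```python
# Illustrative sketch of a keyed data-parallel stream operator (DaSP style).
# Assumption: a simple hash-partitioning scheme over stateful replicas;
# a real runtime would run each replica on its own core or node.
from collections import defaultdict

class Worker:
    """One operator replica: keeps per-key running averages (e.g., sensor data)."""
    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def process(self, key, value):
        self.sums[key] += value
        self.counts[key] += 1
        return self.sums[key] / self.counts[key]  # current running average

def process_stream(stream, n_workers=4):
    """Route each (key, value) tuple to a replica chosen by hashing the key,
    so tuples with the same key always reach the same stateful replica."""
    workers = [Worker() for _ in range(n_workers)]
    results = []
    for key, value in stream:
        w = workers[hash(key) % n_workers]  # consistent key-to-replica mapping
        results.append((key, w.process(key, value)))
    return results
```

Because the per-key state is confined to a single replica, replicas share nothing and can be scaled out (e.g., as container instances under Kubernetes) without coordination, which is what makes this model attractive for edge deployments.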