In this paper, we propose a trust-region algorithm to minimize a nonlinear function f: R^n -> R subject to nonlinear equality constraints h_i(x) = 0, i = 1, ..., m, where h_i: R^n -> R. We are concerned with the case in which n and m may be large. We adopt the approach taken in Vardi (1985), and we replace the l_2-norm in the trust-region constraint by a polyhedral norm (l_1 or l_infinity), an arbitrary l_p-norm with p >= 2, or an arbitrary convex combination of these norms. In particular, when a polyhedral norm is used, the algorithm can be viewed as a sequential quadratic programming method or a sequential linear programming method, depending on whether or not second-order information is used in the local model subproblem. At each iteration, the local model subproblem is solved only to within some tolerance. Instead of the regularity assumption of linearly independent constraint gradients, we assume only that the systems of linearized constraints are consistent. We also assume only that the functions f and h_i, i = 1, ..., m, are continuously differentiable. We demonstrate that any accumulation point of the iteration sequence, obtained from a remote starting point, is a Karush-Kuhn-Tucker point of the constrained minimization problem. This convergence theory follows from powerful and natural properties of the trust-region strategy, in particular a property we call local uniform decrease.
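To make the structure of the local model subproblem concrete, the following sketch (an illustration, not the paper's implementation) sets up the sequential-linear-programming case: a linear model of f, linearized constraints relaxed by a Vardi-style parameter alpha in (0, 1], and an l_infinity trust region, so that the subproblem is a linear program. The test problem, the choice alpha = 1, and the trust-region radius are all assumptions made for this example.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative problem (not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 2 = 0.
def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def h(x):
    return np.array([x[0] + x[1] - 2.0])

def jac_h(x):
    return np.array([[1.0, 1.0]])

def slp_subproblem(x, delta, alpha=1.0):
    """One SLP trust-region step: minimize grad_f(x)^T d subject to
    alpha * h(x) + jac_h(x) d = 0 and ||d||_inf <= delta.
    With the l_infinity norm this subproblem is a linear program."""
    g = grad_f(x)
    A_eq = jac_h(x)
    b_eq = -alpha * h(x)                 # Vardi-style relaxed linearized constraint
    bounds = [(-delta, delta)] * len(x)  # l_infinity trust region is a box
    res = linprog(g, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x

x = np.array([3.0, 1.0])          # current iterate; h(x) = 2, so x is infeasible
d = slp_subproblem(x, delta=3.0)  # step d = (-3, 1); x + d = (0, 2) satisfies h = 0
print(np.round(d, 6))
```

With an l_1 trust region the subproblem is again a linear program (after splitting d into positive and negative parts), while an l_2 or general l_p ball makes it a nonlinear program; this is the sense in which the choice of norm determines the character of the method.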